One of the boldest, most breathless claims being made about artificial intelligence tools is that they have "emergent properties" - impressive abilities gained by these programs that they were supposedly never trained to possess. "60 Minutes," for example, reported credulously that a Google program taught itself to speak Bengali, while the New York Times misleadingly defined "emergent behavior" in AI as language models gaining "unexpected or unintended abilities" such as writing computer code.

This misappropriation of the term "emergent" by AI researchers and boosters deploys language from biology and physics to imply that these programs are uncovering new scientific principles adjacent to basic questions about consciousness - that AIs are showing signs of life. The term captures one of the most thrilling phenomena in nature: complex, unpredictable behaviors emerging from simple natural laws. If anything, these far-fetched claims look like a marketing maneuver - one at odds with the definition of "emergence" used in science for decades.

A new study from Stanford researchers suggests that "sparks" of intelligence in supposedly "emergent" systems are in fact mirages. However, as computational linguistics expert Emily Bender has pointed out, we've been giving AI too much credit since at least the 1960s.