The history of AI
I do not think artificial intelligence began with computers. Computers are only the latest costume. The older story starts with a sore back, a bored hand, and a person staring at some task and thinking, surely I can make something else do this.
That thought is not lazy in the petty sense. It is lazy in the civilization-building sense. Laziness built the wheel, pulley, mill, clock, calculator, vending machine, computer, and finally the machine that writes, sorts, predicts, recommends, and sometimes lies with impressive posture. The history of AI is often told as mathematics and code. I see it more as the history of our impatience with effort.
Long before silicon chips, humans loved machines that could repeat a motion on their own. Cogwheels were an early kind of magic: if one tooth pushed another tooth at the right moment, a hidden world began to move. The Antikythera mechanism, recovered from a Greek shipwreck and dated to around the second or first century BCE, used bronze gears to predict astronomical cycles. Someone, two thousand years ago, built a hand-cranked cosmos out of metal teeth. We have calendar apps now, but the ancient Greeks had a pocket universe.
Old machines taught people that force could be converted. Falling water became rotation; rotation became grinding. Heat became pressure; pressure became motion. Watermills borrowed the strength of rivers and made flour while the miller avoided crushing grain by hand like a person being punished in a myth. The world was full of trapped work, and technology was the art of unlocking it without asking nicely.
That idea reached a loud, smoky peak with steam. Hero of Alexandria, in the first century CE, described the aeolipile, a little spinning steam device that was more toy than engine. The useful version had to wait for mines, money, metalwork, and stubbornness. In 1712, Thomas Newcomen’s engine helped pump water out of mines. James Watt later improved the design with a separate condenser, saving fuel and making steam practical far beyond the mines. Fire began lifting loads, driving machines, and dragging humanity into the Industrial Revolution by the collar.
The Industrial Revolution was not only about factories. It was about replacing muscle with systems. A worker did not need to swing every hammer if a machine could swing it and some clever person could arrange the timing. The human body was no longer the main engine. Once we had begun outsourcing muscles, it was only a matter of time before we looked suspiciously at the brain and asked, what about this thing?
Even before electronic computers, people tried to automate choice and sequence. One of my favorite examples is the vending machine, a kind of automaton if you squint politely. Hero of Alexandria described a device that dispensed holy water when a coin was dropped into it. The coin tipped a pan, opened a valve, released water, and reset the pan. It was not intelligent, but it had a condition and a response: insert coin, receive sacred splash.
Automata also became entertainment. Courts loved mechanical birds, moving statues, and clocks filled with little figures that marched or struck bells. Around 1200, Al-Jazari described water-powered musical automata with parts that could be rearranged to change the performance. The machine’s behavior could be altered without rebuilding it. Before code, there were pegs.
Then came calculators, less theatrical but more insulting to human pride. Blaise Pascal built a mechanical adding machine in the 1640s to help his father with tax calculations. Gottfried Wilhelm Leibniz later designed a stepped reckoner that could multiply and divide. These machines overturned a serious assumption: that arithmetic required a mind behind it. A machine did not have to “understand” numbers. It only had to move correctly. Modern AI is more complex, but the old question remains: when a machine gives the right answer, how much do we care whether it understands?
The nineteenth century pushed the dream toward computing. Charles Babbage imagined the Difference Engine and then the Analytical Engine, machines meant to calculate automatically with mechanical parts. Ada Lovelace saw something deeper: such a machine might manipulate not only numbers, but symbols. She even wrote about machines composing music if the rules of music could be expressed properly. That prediction makes the future look late to its own meeting.
The Jacquard loom deserves credit too. In the early 1800s, it used punched cards to control patterns in woven cloth. Holes and no holes became instructions. Later, punch cards shaped computing, especially through Herman Hollerith’s tabulating machines for the 1890 U.S. Census. Cloth, bureaucracy, and computers are not usually invited to the same dinner, but history has poor table manners.
By the twentieth century, the word “computer” still often meant a person who computed. Many were women working through long columns of arithmetic for astronomy, ballistics, and engineering. Then electronic machines took over the title. Alan Turing gave the world a mathematical model of computation in 1936 and later asked whether machines could think. ENIAC filled a room and demanded serious electricity. Its programmers plugged cables and set switches by hand. Software, at first, looked a lot like moving furniture.
Artificial intelligence became an official field after the Dartmouth workshop in 1956. Early researchers were bold, sometimes too bold. They thought reasoning, translation, vision, and conversation might fall quickly once computers became fast enough and logic became tidy enough. They underestimated the mess. Human intelligence is not a clean hallway. It is a kitchen after a family argument: language, memory, instinct, culture, senses, guessing, and old leftovers everywhere.
Still, the dream kept returning. Perceptrons tried to imitate learning. Expert systems stored rules from specialists. Chess machines beat grandmasters. Search engines sorted the web. Recommendation systems learned our tastes well enough to become irritating. Then neural networks, fed by huge data sets and stronger chips, began recognizing speech, images, and patterns with startling skill. Newer language systems can draft poems, code, lesson plans, apologies, and suspiciously confident nonsense.
After thousands of years of making machines carry water, grind grain, pump mines, weave cloth, sell snacks, add numbers, and file census data, we finally made machines that can talk back. Naturally, one of the first things we asked them to do was write emails.
AI is not separate from the old machines. It is the same hunger wearing a new jacket. We wanted the wheel to spare our feet, the mill to spare our arms, the calculator to spare our arithmetic, and the computer to spare our clerical patience. Now we want AI to spare our attention, planning, and maybe part of our imagination. That is useful, but not harmless. When we automate a task, we change what we practice.
And yet I cannot pretend I am against the dream. I belong to the species that got tired of hauling buckets and invented pipes. Every machine says, in its own way, this was annoying, so we trapped the annoyance in metal, steam, wire, or code. AI is the newest trap. It catches patterns instead of mice, language instead of grain, decisions instead of falling water.
The story of AI is the story of humans discovering, again and again, that the world can be persuaded to work on our behalf. A gear turns, a valve opens, a card is punched, a circuit switches, a model predicts. The dream keeps changing shape, but the wish stays the same. Let the machine do the dull part, so maybe we can do something better with the time. Of course, whether we actually do something better is another question. History gives us the lever. It does not guarantee wisdom.