History of AI

ℹ️ Info          
~ Aristotle Sabouni
Created: 2022-07-17 

Since programming computers is error-ridden and hugely time-consuming, we started wondering if we could make them mimic our ability to learn, and eventually program themselves. The answer is: kinda. We mostly failed at that, but we have been able to teach them to recognize some patterns in datasets (Machine Learning), and a few of those patterns together can be useful.
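
What "recognizing patterns in datasets" means in practice can be illustrated with nearest-neighbor classification, one of the simplest Machine Learning techniques. This is a minimal sketch of the general idea (in Python), not the method any particular system mentioned below actually used:

```python
# A toy "pattern recognition" example: label a new point the same as whatever
# labeled example it sits closest to. Illustrative only; real ML does far more.

def nearest_neighbor(examples, point):
    """examples: list of ((x, y), label). Returns the label of the closest example."""
    def dist2(a, b):
        # Squared distance is enough for comparing which point is closest.
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    closest, label = min(examples, key=lambda e: dist2(e[0], point))
    return label

# Toy dataset: two clusters that were never explicitly programmed as rules.
data = [((1, 1), "cat"), ((1, 2), "cat"), ((8, 8), "dog"), ((9, 8), "dog")]
print(nearest_neighbor(data, (2, 1)))   # cat
print(nearest_neighbor(data, (7, 9)))   # dog
```

No rule about cats or dogs was ever programmed in; the "knowledge" is just geometry over the data. With that in mind, a brief history of AI might be as follows.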

  • In 1950, Alan Turing proposed the idea that if a computer could fool us into thinking it was another human, then wasn't that intelligence? (This became known as the Turing Test.)
  • This spawned a lot of philosophical debate about what intelligence really is, and whether machines could ever have it.
  • The term "Artificial Intelligence" was coined in 1956 at the Dartmouth Workshop, a conference on how to make machines "think". [1]
    • Remember that many academic workshops are just a way to spend research money on brainstorming, travel, and food, pondering futurism on the University's dime. What did this one accomplish? A plan for more research, which yielded little tangible beyond a few pipe dreams about what the future might bring, and a framework of terms (and a field of study) that was later obsoleted. The technology and understanding just weren't mature enough to create anything more useful than cave dwellers' drawings pondering mechanical flight.
  • Over the next couple of decades there were a few more breakthroughs on ideas of how it might happen (many of which wouldn't work), with fad cycles of money being poured into each potential breakthrough (AI "Springs", when research money rained down). Then, when the research didn't materialize anything of value, the funding dried up (AI "Winters", like 1974-1980 and 1987-1993).
  • As far as tangible accomplishments? 1956-1993 was the AI stone age: a few nascent ideas and terms leaked out, but no real problems were getting solved, so there was no real market for it.
  • In 1997, IBM's "Deep Blue" supercomputer became the first computer to beat a reigning world chess champion when it defeated Russian grandmaster Garry Kasparov. But that wasn't really "learning"... it was programmed to search through huge numbers of possible chess positions, and in a very specific game, brute-force search could win (see the sketch after this list).
    • Go has simpler rules but vastly more possibilities, and it took until 2016 for a computer (DeepMind's AlphaGo) to beat one of the best Go players, Lee Sedol. Even then, it's not like it's really thinking, as much as solving a problem we programmed it to solve.
  • In 2011, IBM's Watson won the TV quiz show Jeopardy! by beating champions Brad Rutter and Ken Jennings. Again, a huge database and being faster at trivia isn't quite "thinking", but that kind of problem-solving can be useful.
  • In 2014, a chatbot named Eugene Goostman managed to fool a few judges into thinking it was a human (passing the Turing Test). [2]
    • All of those were highly constrained games, where with enough resources you could program a computer to solve a problem: guess at questions and answers, or convince people you were a non-English-speaking teenager. Knowing from probability and large datasets which term/concept has the most matches (or how everyone who won a chess game reacted in the same piece configuration), or how to evade questions, isn't really intelligence.
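
To make the "not really thinking" point concrete, here is a minimal sketch of minimax game-tree search, the brute-force idea behind chess programs like Deep Blue (an illustration of the technique, not IBM's actual code). The game here is a toy take-away pile instead of chess, to keep it short:

```python
# Minimax game-tree search on a toy game: players alternate taking 1 or 2
# stones from a pile, and whoever takes the last stone wins.

def minimax(pile, my_turn):
    """Score a position: +1 if the side we root for can force a win, -1 if not."""
    if pile == 0:
        # The previous player took the last stone, so the side to move has lost.
        return -1 if my_turn else 1
    moves = [take for take in (1, 2) if take <= pile]
    scores = [minimax(pile - take, not my_turn) for take in moves]
    # Our side picks the best score; the opponent picks the worst one for us.
    return max(scores) if my_turn else min(scores)

# From a pile of 3, the player to move loses with perfect play; from 4, they
# can win (take 1, leaving the opponent the losing pile of 3).
print(minimax(3, True))   # -1
print(minimax(4, True))   #  1
```

Deep Blue's search and evaluation were vastly more sophisticated (it couldn't enumerate all of chess, only look some moves ahead), but the principle is the same: mechanically score reachable positions and pick the best move. Nothing in that loop resembles understanding.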




🔗 More

History
Tales on the parts of history that have been ignored, suppressed, or lied about.

Artificial Intelligence
Computers aren't "intelligent". But you can program them to learn patterns from data, in pretty limited ways.


🔗 Links

Tags: History  AI

