Machine Learning

Today, the field of AI is advancing at an exponential rate. We are in a period of explosive growth, partly because the technology and infrastructure are now able to support highly capable AI, but mostly because of the development of machine learning (ML). ML is the driving force behind the current boom in AI and the reason speculation about AI's potential seems limitless. So what exactly is machine learning? Many believe AI and ML are synonyms; however, the two are quite different. AI is defined as an agent that can mimic human behavior, while ML is the process of training these agents to become "smarter." Machine learning trains AI agents through repeated trials. For example, feeding an agent thousands of labeled images (this is a tree, this is a cat) allows the algorithm to gradually identify the characteristics that similarly labeled images share. Over time, a highly trained AI becomes capable of distinguishing a dog from a muffin, or a twin from his or her other half.
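To make the idea concrete, here is a minimal supervised-learning sketch, not from the original article: it trains a simple classifier on scikit-learn's built-in handwritten-digit images as a stand-in for "thousands of labeled images." The dataset, classifier, and parameters are all illustrative assumptions.

```python
# Minimal supervised-learning sketch: fit a classifier on labeled images,
# then check how well it generalizes to images it has never seen.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

digits = load_digits()                      # 8x8 grayscale images labeled 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=2000)   # simple stand-in; deep models follow the same fit/predict pattern
model.fit(X_train, y_train)                 # "repeated trials" over labeled examples

predictions = model.predict(X_test)
print(f"Accuracy on unseen images: {accuracy_score(y_test, predictions):.2%}")
```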

Weak AI and Strong AI

Currently, most AI is capable of performing just one task very well. These agents are called weak AI. IBM Watson is excellent at winning Jeopardy, but it is limited to the narrow skill set needed to play Jeopardy. The same applies to Siri and Google Assistant: they are good at recognizing voices and returning relevant data, but they cannot do anything they are not explicitly programmed to do.

Strong AI is what we see in science fiction: an agent with general intelligence that can act and perform to the same extent a human being can. Strong AI agents are able to adapt to their situation and do pretty much anything they set their minds to. As of now, we are still far from this level of artificial intelligence, although extensive research is being conducted with the ultimate goal of developing strong AI with general intelligence.

AlphaGo: Doing the Impossible

Deep Blue had conquered chess, and Watson had conquered Jeopardy, but one game was thought to be impossible for an algorithm to master: Go. In October 2015, Google unveiled AlphaGo, a revolutionary project that blew minds and did the impossible. While a chess game is played on an 8x8 board, Go is played on a 19x19 grid with 361 points, and the number of possible board positions dwarfs that of chess. AlphaGo was able to consistently defeat the world's best Go champions, making it the best Go player in history.

It was built on the belief that humans play games like Go based on intuition, a feeling generated by the underlying mechanisms of the human mind. To model the mind, AlphaGo was built using neural networks: layered systems of simple computational units that learn by adjusting the strength of the connections between them, loosely modeled after the networks of neurons in animal brains.
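As a loose illustration (not part of the original article), here is a minimal sketch of such a network in PyTorch; the layer sizes, the 19x19 input, and the library choice are all assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

# A tiny feedforward neural network: layers of simple units whose connection
# weights are what gets adjusted during training.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(19 * 19, 128),  # e.g. a flattened 19x19 Go board as input
            nn.ReLU(),
            nn.Linear(128, 1),        # one output, e.g. an estimate of who is ahead
            nn.Tanh(),
        )

    def forward(self, board):
        return self.layers(board)

net = TinyNet()
empty_board = torch.zeros(1, 19 * 19)  # an all-zero "board", just to show the shapes
print(net(empty_board))                # a single number between -1 and 1
```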

AlphaGo was so revolutionary because it learned the game of Go essentially from scratch. Simply by playing games over and over again, the system trained itself from a program that knew little more than the rules of Go into the best player in the world. In fact, Google "had it play against different versions of itself thousands of times, each time learning from its mistakes and incrementally improving until it became immensely strong, through a process known as reinforcement learning" ("AlphaGo Zero: Learning from Scratch").
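The self-play idea can be sketched in a few lines. Everything below is an illustrative assumption: the toy game (a simple version of Nim), the tabular value function, and the training parameters. AlphaGo itself used deep neural networks and tree search, but the learn-from-your-own-games loop is the same in spirit.

```python
import random
from collections import defaultdict

# Toy self-play reinforcement learning on Nim: 10 coins, players alternate
# taking 1-3 coins, whoever takes the last coin wins. One value table is
# shared by both "copies" of the agent, so every game it plays against
# itself generates experience it learns from.
Q = defaultdict(float)          # Q[(coins_left, action)] -> estimated value
ACTIONS = (1, 2, 3)
EPSILON, ALPHA = 0.1, 0.5       # exploration rate and learning rate

def choose(coins):
    legal = [a for a in ACTIONS if a <= coins]
    if random.random() < EPSILON:                     # explore occasionally
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(coins, a)])    # otherwise play the best known move

for game in range(20000):
    coins, history = 10, []                           # (state, action) pairs, both players
    while coins > 0:
        action = choose(coins)
        history.append((coins, action))
        coins -= action
    reward = 1.0                                      # the player who moved last won
    for state, action in reversed(history):           # learn from this game's outcome
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward                              # alternate winner/loser credit

# With enough games the agent usually converges on the classic winning
# strategy: leave a multiple of 4 coins for the opponent.
for coins in range(1, 11):
    best = max((a for a in ACTIONS if a <= coins), key=lambda a: Q[(coins, a)])
    print(f"{coins} coins left -> take {best}")
```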

AlphaGo’s success spurred the realization that the most intelligent AI systems will be those that can learn on their own, and that learn in the same way humans do. Whereas humans face natural barriers like sleep, age, and fatigue, an AI that can think like a human can perform a task around the clock, millions of times, until it becomes the best.

Applications to AR

[Image courtesy of MagicLeap]

Augmented reality (AR) entails adding a digital layer on top of our current world. It allows us to visualize digital objects as part of the real world, serving as a bridge between technology and real life. For now, that is mostly seen in fun Snapchat face filters or in games like Pokemon Go, but the technology is being developed for use cases beyond entertainment. Imagine an arrow on the floor in front of you telling you where to walk when you’re trying to find the closest Starbucks. AR is already being implemented in industries like health care; for example, one app helps model the eyeball and the diseases that can affect it. A future with AR-enabled eyeglasses is nearing, and in such a world our lives will be immersed in AR, blending the digital and the physical.

Computer vision, a subfield of AI, plays a large role in making AR effective. Without AI that can recognize our faces, Snapchat filters wouldn’t be possible. Think about the maps example: computer vision is used to recognize roads and place directional arrows on them.
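As a rough illustration of the kind of computer vision behind a face filter, here is a minimal sketch, not from the article, using OpenCV's bundled Haar-cascade face detector; the file name "photo.jpg" is a placeholder for any image on disk.

```python
import cv2

# Minimal computer-vision sketch: find faces in an image, the first step a
# face filter needs before it can anchor digital content to a face.
image = cv2.imread("photo.jpg")                      # "photo.jpg" is a placeholder path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # An AR filter would render its overlay inside this rectangle.
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("photo_with_faces.jpg", image)
print(f"Found {len(faces)} face(s)")
```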

Autonomous Driving

[Image courtesy of Google Waymo]

The self-driving car is commonly heralded as one of the most disruptive innovations of the near future. What most people don’t realize is that self-driving cars are already among us. Uber has been completing rides with self-driving cars in Pittsburgh for around two years now, and Alphabet just launched Waymo One, an Uber-like platform that lets people hail rides from its fleet of self-driving taxis. A world with autonomous vehicles implies a world where transportation becomes cheap and accessible, but also one that is increasingly reliant on technology. AI compiles the vast amount of information gathered from a vehicle’s many sensors and turns it into driving decisions, quickly and perpetually.
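As a toy illustration of that compile-then-decide loop, here is a sketch that is not from the article and does not reflect how production systems actually work; every sensor name, number, and threshold below is made up.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor: str
    distance_m: float   # estimated distance to the obstacle ahead
    confidence: float   # 0.0 to 1.0, how much we trust this sensor right now

def fuse(readings):
    """Confidence-weighted average of the sensors' distance estimates."""
    total_weight = sum(r.confidence for r in readings)
    return sum(r.distance_m * r.confidence for r in readings) / total_weight

def decide(distance_m, speed_mps, reaction_margin_s=2.0):
    """Brake if the obstacle is closer than the distance covered in the margin."""
    return "BRAKE" if distance_m < speed_mps * reaction_margin_s else "CRUISE"

# Hypothetical readings from three sensors looking at the same obstacle.
readings = [
    Reading("lidar",  distance_m=42.0, confidence=0.9),
    Reading("radar",  distance_m=45.0, confidence=0.7),
    Reading("camera", distance_m=38.0, confidence=0.5),
]
fused = fuse(readings)
print(f"Fused distance: {fused:.1f} m -> {decide(fused, speed_mps=25.0)}")
```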

Last:

Origins 🕑

Next up:

Future 🔮