James Cameron Was Right. This Dirty Little Secret Will Bring Machines to Life.

They see you when you’re sleeping, they know when you’re awake, they know if you’ve been bad or good… no, they’re not Santa and his little helpers — they’re algorithms, strong unseen forces that shape our everyday lives.

When you think of algorithms, what comes to mind? If you’re like most people, images of mathematicians, scientists and physicists pop into your head, scribbling formulas on whiteboards or mixing chemicals in laboratories. While that stereotype isn’t without some truth, you interact with algorithms more than you know. Your Facebook feed? Algorithms decide which posts you see. Advertising on a website? Algorithms infer your interests and serve the ads most likely to catch your eye. Intrigued by the recommendations on Amazon.com and Netflix? Yup, algorithms came up with those too.

They extend into every part of your life. Even if you’re an Internet Luddite who uses a landline instead of e-mail, you still interact with them. Televisions, microwave ovens, blenders, elevators, ATMs, living room lights, home security systems, even your car: if a device adjusts its behavior based on your actions, chances are an algorithm is doing the work.

But since they’re abstract and unseen, you often forget how omnipresent they are. They hum along quietly, gathering data and executing instructions, all to make your life just a bit easier. The invisible world of algorithms is teeming with activity, and they’re learning and evolving. Still, too much of a good thing can be dangerous, and at times they’ve bubbled up to disrupt our everyday lives, creating earthquakes of their own.

In the simplest terms, an algorithm is a step-by-step process of calculation; in layman’s terms, a mathematical “recipe” used to automate decision-making. So when you leave the room, you don’t need to tell the lights to turn off. When the sensors realize no one is in the room, they tell the rest of the system to dim the lights, and when you return, it turns them back on. Simple.
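To make the recipe concrete, here is roughly what such a rule looks like written out as code. It’s a minimal sketch; the function name and the sensor inputs are hypothetical, invented purely for illustration:

```python
def update_lights(room_is_occupied, lights_are_on):
    """A tiny decision rule: decide what the lights should do next.

    This is the whole "algorithm": a fixed sequence of checks that maps
    sensor readings to an action, with no human in the loop.
    """
    if room_is_occupied and not lights_are_on:
        return "turn_on"
    if not room_is_occupied and lights_are_on:
        return "turn_off"
    return "no_change"
```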

The word “algorithm” comes from the seminal Persian mathematician al-Khwarizmi, whose Latinized name was “Algoritmi.” His work led to the field of algebra, and, among other achievements, he introduced the West to Arabic numerals. The Babylonians invented the earliest known algorithms sometime before 1600 B.C., but it was the Greek mathematician Euclid who created the first famous one, a procedure for calculating the greatest common divisor of two integers. Since then, algorithms have been vehicles for storing knowledge, and as we’ve evolved, so have they. Clocks, catapults, computers: if you can write a recipe to design it, there’s one to run it. And with the growth of the Internet, they’re everywhere, covering nearly every field and subject.
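Euclid’s procedure still fits in a few lines today. Here’s a quick sketch of the classic remainder version in modern code:

```python
def gcd(a, b):
    """Euclid's algorithm: the greatest common divisor of two integers.

    Repeatedly replace the pair (a, b) with (b, a mod b); when the
    remainder hits zero, the other number is the answer.
    """
    while b != 0:
        a, b = b, a % b
    return abs(a)

print(gcd(252, 105))  # prints 21
```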

As a result, algorithms are tackling problems once thought impossible to calculate, even matters of taste and emotion. Researchers at Tottori University in Japan, for example, created programs that predicted hit movies, including The Da Vinci Code, Spider-Man 3 and Avatar, by plugging in variables from advertising and word-of-mouth. They then compared the predictions to the revenue each film actually generated.

“They appeared to match very well, meaning the calculations could provide a fairly good prediction of how successful a movie could be even before it is released,” Tottori researchers said in a statement. Lead researcher Akira Ishii added, “I think our model is very general. It will work in other countries as well.”
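The Tottori team’s actual model is considerably more elaborate, but the basic idea of fitting measurable inputs to revenue can be sketched with a simple least-squares fit. The variable names and the linear form below are illustrative assumptions, not the researchers’ method:

```python
import numpy as np

def fit_revenue_model(ad_spend, word_of_mouth, revenue):
    """Fit revenue ~ a*ad_spend + b*word_of_mouth + c on past films.

    Each argument is a 1-D array with one entry per already-released film;
    the fitted coefficients can then be used to score an unreleased one.
    """
    X = np.column_stack([ad_spend, word_of_mouth, np.ones(len(revenue))])
    coeffs, *_ = np.linalg.lstsq(X, revenue, rcond=None)
    return coeffs  # [a, b, c]

def predict_revenue(coeffs, ad_spend, word_of_mouth):
    a, b, c = coeffs
    return a * ad_spend + b * word_of_mouth + c
```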

Dating apps, too, play matchmaker, using formulas to find the person you’ll light a spark with. There’s even a set of formulas that can judge humor. Researchers at the University of California, Berkeley created a collaborative filtering algorithm called “Eigentaste” that powers the joke recommendation site Jester. By rating jokes, you tell the site what makes you laugh, and it recommends others that cater to your taste.
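Eigentaste itself clusters users with principal component analysis, but the collaborative filtering idea can be sketched with a simpler user-to-user approach: find people whose past ratings correlate with yours, and let them vote on what you haven’t seen. This is a generic sketch with an assumed data layout, not the actual Jester code:

```python
import numpy as np

def recommend_joke(ratings, user):
    """Pick an unrated joke for `user` via user-based collaborative filtering.

    ratings: 2-D array, rows = users, columns = jokes, np.nan = not rated.
    Each candidate joke is scored by the ratings of like-minded users,
    weighted by how strongly their tastes correlate with the target user's.
    """
    target = ratings[user]
    best_joke, best_score = None, -np.inf
    for joke in range(ratings.shape[1]):
        if not np.isnan(target[joke]):
            continue  # the user has already rated this one
        weighted_sum, total_weight = 0.0, 0.0
        for other in range(ratings.shape[0]):
            if other == user or np.isnan(ratings[other, joke]):
                continue
            shared = ~np.isnan(target) & ~np.isnan(ratings[other])
            if shared.sum() < 2:
                continue  # not enough overlap to judge similarity
            similarity = np.corrcoef(target[shared], ratings[other][shared])[0, 1]
            if np.isnan(similarity) or similarity <= 0:
                continue
            weighted_sum += similarity * ratings[other, joke]
            total_weight += similarity
        if total_weight > 0 and weighted_sum / total_weight > best_score:
            best_joke, best_score = joke, weighted_sum / total_weight
    return best_joke  # None if no joke could be scored
```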

Algorithms can also police the world for dangers. Those surveillance cameras mounted on traffic posts? They can predict whether you’ll run that red light. No need for human review.

They also scan e-mail and social media for national security threats and industrial espionage. The Air Force Institute of Technology, for example, created software that spots threats using an algorithm known as Probabilistic Latent Semantic Indexing, or PLSI, which infers an employee’s interests and interactions from e-mail to sniff out insiders who may be breaking the rules.
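At its core, PLSI models each document (here, an employee’s mailbox) as a mixture of hidden topics, fitted by an expectation-maximization loop over word counts. Below is a minimal, generic sketch of that procedure; the inputs and parameter names are assumptions for illustration, not the Air Force Institute’s actual system:

```python
import numpy as np

def fit_plsi(counts, n_topics=2, n_iters=50, seed=0):
    """Fit a tiny PLSI-style topic model with expectation-maximization.

    counts: (n_docs, n_words) array of term counts, e.g. one row per mailbox.
    Returns P(word | topic) and P(topic | doc); unusual topic mixtures in
    P(topic | doc) are what an analyst might flag for review.
    """
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    p_w_given_z = rng.random((n_topics, n_words))
    p_w_given_z /= p_w_given_z.sum(axis=1, keepdims=True)
    p_z_given_d = rng.random((n_docs, n_topics))
    p_z_given_d /= p_z_given_d.sum(axis=1, keepdims=True)

    for _ in range(n_iters):
        # E-step: how responsible is each topic for each (doc, word) pair?
        joint = p_z_given_d[:, :, None] * p_w_given_z[None, :, :]
        joint /= joint.sum(axis=1, keepdims=True) + 1e-12
        # M-step: re-estimate both distributions from the weighted counts.
        weighted = counts[:, None, :] * joint
        p_w_given_z = weighted.sum(axis=0)
        p_w_given_z /= p_w_given_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_given_d = weighted.sum(axis=2)
        p_z_given_d /= p_z_given_d.sum(axis=1, keepdims=True) + 1e-12
    return p_w_given_z, p_z_given_d
```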

More benignly, emergency response teams use specialized formulas to model and manage natural disasters, helping them decide the best course of action for events like wildfires, floods and epidemics. IBM researchers, for example, use an advanced mathematical technique called stochastic programming to model and solve problems during a crisis, such as how best to deliver supplies.
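The essence of stochastic programming is picking an action now that minimizes expected cost across uncertain future scenarios. Here is a toy sketch of that idea, deciding how many supply kits to pre-position before a storm; the scenario structure and cost names are illustrative assumptions, not IBM’s models:

```python
def preposition_supplies(scenarios, unit_cost, shortage_penalty):
    """Choose a stock level that minimizes expected cost across scenarios.

    scenarios: list of (demand, probability) pairs describing how severe
    the disaster might turn out to be.  The cost is what we pay to stock
    kits up front plus the expected penalty for every kit of unmet demand.
    """
    def expected_cost(stock):
        expected_shortage = sum(p * max(d - stock, 0) for d, p in scenarios)
        return unit_cost * stock + shortage_penalty * expected_shortage

    # The cost is piecewise linear, so the optimum sits at zero or at one
    # of the scenario demands; checking those candidates is enough.
    candidates = [0] + [d for d, _ in scenarios]
    return min(candidates, key=expected_cost)
```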

Others can use these same programs to detect fraud in insurance claims or schedule production tasks and make manufacturing plants more efficient.

In the age of the Internet, lightning-fast computers and big data, algorithms are about to take a leap ahead. There’s enough storage and memory to hold and process more information, and programmers are creating software that learns on its own. These strands of code change themselves as they weave more data into their memory, evolving much like a living organism.
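“Self-learning” here usually means a model that rewrites its own parameters as each new example arrives. Below is a minimal sketch of that idea using a classic online perceptron; the function and its inputs are generic illustrations, not any particular product’s code:

```python
def train_online(examples, learning_rate=0.1, n_features=3):
    """A tiny perceptron that adjusts its weights with every new example.

    examples: iterable of (features, label) pairs, with label in {-1, +1}.
    The model literally changes itself as data streams in: each mistake
    nudges the weights toward the correct answer.
    """
    weights = [0.0] * n_features
    bias = 0.0
    for features, label in examples:
        score = sum(w * x for w, x in zip(weights, features)) + bias
        prediction = 1 if score >= 0 else -1
        if prediction != label:
            weights = [w + learning_rate * label * x
                       for w, x in zip(weights, features)]
            bias += learning_rate * label
    return weights, bias
```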

Self-evolving algorithms promise to make our lives easier, breaking down complex, sometimes overwhelming problems, and even making decisions for us. The idea of automating decision-making sounds alluring, especially in an increasingly fast-paced world. But many critics, like MIT professor Kevin Slavin, see the dangers of an algorithm-driven world. Self-learning models, he argued in a TED Talk, can evolve quickly, perhaps even faster than the human mind can comprehend, altering themselves to the point of incomprehensibility.

In other words: Judgment Day. As the “Terminator” film franchise ominously warns, out-of-control software that begins to alter its own code in ways its human makers don’t understand is a recipe for trouble.

In the Skynet scenario, machines become self-aware and rise up against their human creators. Runaway code isn’t just science fiction, though: it has already caused the largest intra-day point decline in the history of the Dow Jones. During the so-called “Flash Crash” of 2010, the Dow plunged about 1,000 points in minutes, only to recover almost as quickly, the result of trading algorithms run amok.

Computer programs, which execute up to 65 percent of trades on Wall Street, calculate which stocks and bonds to buy and sell. Algo-trading software, used by investment firms, brokerages, pension funds and other financial players, feeds market conditions, among other factors, into pre-programmed instructions that execute buy or sell orders, often without any human intervention. It’s incredibly fast, and it supplies a large share of the liquidity in the stock market.
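Those pre-programmed instructions can be as simple as a threshold rule on recent prices. Here is a minimal sketch; the moving-average crossover rule below is a common textbook example, not any firm’s actual strategy:

```python
def trading_signal(prices, short_window=5, long_window=20):
    """Return "buy", "sell" or "hold" from a moving-average crossover rule.

    prices: recent trade prices, oldest first.  Buy when the short-term
    average rises above the long-term average, sell when it drops below.
    """
    if len(prices) < long_window:
        return "hold"
    short_avg = sum(prices[-short_window:]) / short_window
    long_avg = sum(prices[-long_window:]) / long_window
    if short_avg > long_avg:
        return "buy"
    if short_avg < long_avg:
        return "sell"
    return "hold"
```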

Computer scientist Josh Levine created one of the first computer trading platforms, designed to open up and demystify the market for regular investors. But major players in the financial industry used these programs to execute trades at higher volumes and faster speeds, hoping to gain an edge on competitors. Investment banks, hedge funds and venture capitalists bid to hire the brightest computer scientists to create the fastest and smartest trading programs, aiming to execute a transaction a mere millisecond faster than rivals, a margin that often determines winners and losers.

During the Flash Crash, one of these programs sold 75,000 contracts with a total value of $4 billion in 20 minutes, according to the New York Times, causing other trading algorithms to follow suit.

Those few minutes wreaked havoc and panic until the market realized it was a glitch and corrected itself. In the wake of the flurry, the Securities and Exchange Commission launched an investigation into how algo-trading affected market volatility, as well as what “circuit breakers” could be put in place to halt the machines if, and when, they run amok again. Despite the probe, the results were inconclusive. No one knows exactly what sent the algorithms haywire, or how they corrected themselves.
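Conceptually, a circuit breaker is a simple check: compare the latest price with a recent reference price and pause trading when the move exceeds a set band. A minimal sketch follows; the 10 percent threshold and function name are illustrative assumptions, not the SEC’s actual rules:

```python
def should_halt(reference_price, current_price, max_move=0.10):
    """Return True if the price has moved more than `max_move` (here 10%)
    away from the reference price, signalling that trading should pause."""
    change = abs(current_price - reference_price) / reference_price
    return change >= max_move
```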

“People came into the market and began to develop these high-speed computers,” Sen. Ted Kaufman (D., Del.), told the Wall Street Journal. “Human beings were no longer doing the trading, computers were. They developed these algorithms. It ran automatically. It grew and grew. There is no way to know what is going on. No one knows what is happening in these exchanges when this trading is going on. We have a very dangerous situation.”

Why use algorithms at all? Well, there’s a fundamental limit to human decision-making, the point at which our brains simply can’t process information any faster, and it’s about a second. If that sounds quick, consider this: machines today can execute a billion instructions in the same timeframe. The world of algorithms is limited only by the speed of light. To advance the human race, we’ll have to overcome our biological limitations, and that’s where algorithms take over.

As they infiltrate and begin to automate all aspects of our lives, we need to look at the extent to which these programs influence us, and become aware of the data we feed them. They exist so we have an easier time making complex decisions, automating the tasks and calculations that often overwhelm us. They play a powerful role in helping us make smarter, faster decisions, even in the most perilous of situations.

But we need to be careful of automating too much, especially as algorithms guide our perceptions of what we like, who we should connect with and what we should buy. They’re influencing our decisions and shaping our world. Are you in the driver’s seat, or is a mathematical formula? Maybe only the algorithm knows.

