Artificial intelligence doesn’t need to be sophisticated to analyze customer behavior, make purchase recommendations, identify the best price, or even paint a Rembrandt. It might require a lot of data and time to set up, but it isn’t the complex sci-fi concept that you might imagine.
Article titles nowadays range from “Hands Down: Is AI Dangerous Or Not?” to “AI Will Add $15 Trillion To The World Economy By 2030.” It’s important to recognize that there are many misunderstandings surrounding AI and how it works. The name itself is misleading. The vast majority of AI doesn’t have actual intelligence; it simply identifies patterns. As the Chinese Room argument illustrates, pattern-matching involves no mind, understanding or consciousness.
Even more confusingly, AI has many different names: strong, weak, general, narrow, etc. On top of that, there are subcategories such as machine learning, deep learning and big data. For the sake of this argument, there are really only two types: weak AI, which identifies patterns in a single domain, and strong AI, which would match human-level intelligence across domains and does not yet exist.
Here’s how I think about it: How many people actually know how the internet works? Even so, we all use it. When it comes to AI, we primarily use the “weak” type. It’s best in an environment with limited possibilities and set rules, like a game. (Speaking of which, humans can no longer beat AI at games like chess and Go.)
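To make “limited possibilities and set rules” concrete, here is a minimal sketch of minimax search on tic-tac-toe, the textbook example of a game small enough to solve exhaustively. The board encoding and function names are my own for illustration; they are not from any particular library.

```python
from functools import lru_cache

# Tic-tac-toe has set rules and a finite state space, exactly the kind of
# environment where "weak" AI plays perfectly. Board: a tuple of 9 cells,
# each "X", "O", or " ". "X" moves first.

LINES = [(0,1,2), (3,4,5), (6,7,8),   # rows
         (0,3,6), (1,4,7), (2,5,8),   # columns
         (0,4,8), (2,4,6)]            # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Value of `board` with `player` to move under perfect play:
    +1 if X wins, -1 if O wins, 0 for a draw."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if " " not in board:
        return 0  # board full, no winner: draw
    other = "O" if player == "X" else "X"
    values = [minimax(board[:i] + (player,) + board[i + 1:], other)
              for i, cell in enumerate(board) if cell == " "]
    return max(values) if player == "X" else min(values)

empty = (" ",) * 9
print(minimax(empty, "X"))  # 0: with perfect play from both sides, a draw
```

The machine never gets tired or careless, which is why a few dozen lines of search already guarantee it will never lose at this game.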
In my work focusing on AI, among other emerging technologies, I’ve observed that this field is dominated by a few large companies; however, there are still ways for entrepreneurs and SMEs to compete. In fact, through my work at an AI competition platform, I’ve seen this innovation in action.
Subjective Vs. Objective Data
In the 1950s, single computers took up entire floors. Today, a smartphone — far more powerful than the computers back then — fits in your hand.
Another big change is the massive amount of data we have and constantly generate. When we moved from analog to digital cameras, taking a photo and sharing it became so easy that nothing seemed too trivial to snap. This has resulted in trillions of pictures’ worth of content — a win for AI, which relies heavily on large amounts of data.
However, one problem with AI is that it can’t be truly objective unless the data it is fed is objective. Of course, the data is provided by humans. It’s not just a matter of objectivity, either. There might simply be a lack of understanding or an insufficient amount of data (assuming you even know whether the amount of data is sufficient or not). If I train AI to identify music and the only data I feed it is jazz, I am bound to run into unexpected situations with any other genre.
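The jazz example above can be sketched in a few lines. This is a hypothetical toy: a nearest-centroid “genre classifier” with made-up two-dimensional feature vectors, trained on a single genre. The point is structural, not musical: a model can only predict labels it has seen.

```python
# Hypothetical sketch: a nearest-centroid classifier trained only on jazz.
# Feature vectors and genre names are invented for illustration.

def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def predict(model, x):
    """Return the training genre whose centroid is closest to x."""
    def dist(genre):
        return sum((a - b) ** 2 for a, b in zip(model[genre], x))
    return min(model, key=dist)

# The training data contains exactly one genre...
training = {"jazz": [[0.9, 0.2], [0.8, 0.3], [0.7, 0.25]]}
model = {genre: centroid(vs) for genre, vs in training.items()}

# ...so every input, however un-jazz-like its features, is labeled "jazz".
print(predict(model, [0.1, 0.95]))  # jazz
```

No amount of cleverness in the algorithm fixes this; only broader, more representative data does.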
How can you promote objectivity when building an AI solution? The answer is simple: Feed AI the largest amount of relevant data you can, and be as objective as humanly possible. You need to step up your game. Use multivariate over A/B testing. Create a constant stream of relevant data (e.g. YouTube has more videos than it could possibly need). Data is never unbiased, so more is always better. To share an estimate: AI requires some 10,000-100,000 examples to recognize an “average” pattern. For “hard” problems that require anything on the scale of deep learning, some 100,000-1,000,000 examples are required. A quadrillion would be better.
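One reason multivariate testing demands so much more data than a simple A/B test is combinatorial: every factor multiplies the number of variants you must cover. A quick sketch, with factor names and counts that are purely illustrative assumptions (the per-variant sample size reuses the article’s lower-bound estimate of 10,000 examples):

```python
from math import prod

# Illustrative factors for a landing page; the names and counts are made up.
factors = {"headline": 3, "image": 4, "cta_color": 2}

ab_variants = 2                        # A/B test: one factor, two variants
mvt_variants = prod(factors.values())  # multivariate: every combination

print(mvt_variants)  # 3 * 4 * 2 = 24 combinations to cover

# If each variant needs roughly n samples for a reliable read, the required
# traffic scales with the number of combinations:
n_per_variant = 10_000
print(ab_variants * n_per_variant)   # 20000 samples for the A/B test
print(mvt_variants * n_per_variant)  # 240000 samples for the multivariate test
```

That multiplicative growth is exactly why a “constant stream of relevant data” matters: the richer your experiments, the more examples each cell of the experiment needs.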
When building an AI team, you need someone who understands analytics (e.g. statistics and forecasting) and data (e.g. where it comes from, qualitative vs. quantitative) and someone who knows how to put the analytics into practice (e.g. identifies the problem and has a vision for the solution). A good example is Apple’s two-man dream team, Steve Wozniak and Steve Jobs.
The Challenge Of Data Monopolies
Big companies such as Google and Facebook have access to incredibly large amounts of data. When a system is fully digitized, it does not become harder to manage more customers, inquiries, demand or products — the system just gets better. Imagine how different Amazon would be without the billions of data points on shopping preferences and search histories that it has documented, categorized, sorted and analyzed.
This isn’t something small companies can easily pull off. The situation isn’t as grim as it may sound, though. The big guys aren’t only aiming for profit. There is a lot of freely accessible public data, code, templates, tools and much more out there. Computing power can be rented, and talent can be borrowed.
Cast aside the assumption that you need a big budget and a big team. You can build apps without any code (for example, with bubble.io). Presentation decks can be made in Canva. Get code on GitHub. Post your problem statement on data science competition platforms where thousands code away to claim the prize money. Make your own chatbots for free on Unibot. The key is to stay lean, start crude and take things from there. Get that MVP out.
The Startup Underdog
Everybody loves the underdog — the two-guys-in-a-garage unicorn. But how can they stand a chance against tech giants? How can your AI module beat one fed with billions of data points? The answer is steam. In the late 17th century, few people saw the steam engine coming, yet it rewrote the rules for every established industry. Disruption, not scale, is the underdog’s weapon.
We tend to learn how to modify existing algorithms instead of thinking for ourselves. Great things don’t happen in tiny little increments. They happen when someone thinks differently. If you’ve ever wished for something to be different, that’s your opening to create. That’s how one person can beat 100.