Will the Real AI Please Stand Up?

Make sure that your use of machine learning has more substance than hype. Here’s a framework you can use to help you cut through the noise.

Guest Commentary

May 20, 2019

5 Min Read

If you’re tuned into the innovation scene right now as an investor, journalist, or the average Jane, you may be wondering if we’re on the cusp of the most exciting time ever to be alive… like ever. I mean, if the unbridled excitement of pitch decks and media coverage is to be believed, artificial intelligence (AI) and, more specifically, machine learning (ML) promise to be bigger than jet packs, crypto, and CBD oil all rolled up into one. Algorithms will soon(ish) be able to drive my car, beef up my cybersecurity, AND make me a more perfect chicken pot pie.

As investors who see a high volume of new, early-stage companies, we needed to create a framework for quickly understanding:

  • When are we seeing an actual application of ML?

  • Does the application of ML appreciably improve the product?

  • Will this product make it to market in a realistic timeframe for a venture investment?

So, I created a “Real AI Rubric” to cut through the noise and pressure-test the tech and ideas I’m seeing every day. With it, we’re able to get inside a team’s thought process, gauge how deep that thinking goes, and, in some cases, separate the tech from the marketing claims.

How does your solution improve as you have access to more/different data?

In most cases, ML-driven solutions improve with access to the right data, and lots of it. There are many forms of training and many kinds of models, but at the end of the day, ML solutions find hidden patterns in data and use those patterns to predict answers to subsequent queries. In fact, a very substantial share of “AI work” goes into choosing and cleaning the data sources you’ll use to train your ML model, and still more of that data goes into testing and tuning the model. You get the idea.
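
To make that concrete, here’s a minimal sketch of where that data work sits in a typical supervised pipeline. The dataset is made up and the library choice (scikit-learn) is mine for illustration, not any particular startup’s:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Stand-ins for real, cleaned feature data and labels -- made up here.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 10))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)

    # Most of the effort in practice happens before this point: choosing
    # sources, cleaning, and labeling. The split below reserves data for
    # the testing and tuning described above.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))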

Technical teams using ML should be able to clearly articulate how access to new and/or improved data sets gets them to better results faster than a more traditional programming approach would -- capturing an algorithm in code and then refining that code to achieve the same goal.
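
One way a team can back that claim with evidence is a learning curve: hold the model fixed and measure how performance changes as the training set grows. A hedged sketch, again with scikit-learn and the same made-up data as above:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import learning_curve

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 10))          # same made-up data as above
    y = (X[:, 0] + X[:, 1] > 0).astype(int)

    sizes, train_scores, val_scores = learning_curve(
        RandomForestClassifier(random_state=0), X, y,
        train_sizes=np.linspace(0.1, 1.0, 5), cv=5)

    for n, score in zip(sizes, val_scores.mean(axis=1)):
        print(f"{n:5d} examples -> {score:.3f} cross-validated accuracy")
    # If the curve is still climbing at the right edge, more data should
    # help; if it has flattened, more data alone probably won't.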

How is your algorithm different from a linear regression?

Just a few years back, big data was the AI of the moment. Companies were spelunking through tons of structured and unstructured data in search of smarter conclusions. A very common approach was (and still is) fitting a simple linear regression to numerical data and then using it to predict outcomes. Linear regression can be powerful and can absolutely be the right application of mathematics to a problem or product. However, it is not ML, nor is it any other form of AI, and it shouldn’t be positioned as such. Knowing that a team knows the difference is as important as the difference itself.
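
For reference, here’s the kind of model this question is probing for -- an ordinary least-squares fit in a few lines of NumPy, with made-up numbers. Useful math, but not machine learning in the sense discussed above:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # e.g., ad spend (made up)
    y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])  # e.g., revenue (made up)

    # Fit y = slope * x + intercept by least squares.
    A = np.column_stack([x, np.ones_like(x)])
    (slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

    print(f"y = {slope:.2f}x + {intercept:.2f}")
    print("prediction at x=6:", slope * 6 + intercept)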

What part of your solution could benefit from the use of GPUs, TPUs, or other accelerators?

One reason that Nvidia stock went on such a tear is that the same primitives used in graphics apply directly to a lot of heavily mathematical algorithms. GPUs are certainly used to accelerate statistical workloads, but beyond that, almost all of the major ML frameworks have been modified to take advantage of this hardware. The point of this question isn’t just to see how well a team understands the role of AI-focused hardware, but to see how they’re thinking about the performance of their solution as more and more users and more and more data arrive.
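
As an illustration (my sketch, not a benchmark), here’s roughly what “taking advantage of the hardware” looks like in PyTorch -- the same large matrix multiply run on the CPU and then on a GPU when one is available. The timings are illustrative; actual speedups depend on the hardware and the workload:

    import time
    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    a = torch.randn(4096, 4096)
    b = torch.randn(4096, 4096)

    start = time.perf_counter()
    (a @ b).sum().item()                        # matmul on the CPU
    cpu_time = time.perf_counter() - start

    a_dev, b_dev = a.to(device), b.to(device)   # move data to the accelerator
    start = time.perf_counter()
    (a_dev @ b_dev).sum().item()                # runs on the GPU when present
    dev_time = time.perf_counter() - start

    print(f"cpu: {cpu_time:.3f}s | {device}: {dev_time:.3f}s")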

Which open source packages are you using most heavily?

Not surprisingly, open source rules the roost when it comes to modern AI programming, and frameworks like TensorFlow, PyTorch, and Caffe2 have become the norm. As with so many developer-focused debates, different camps arise with differing reasons for their choices -- and almost always with passion. There are plenty of comparisons available. What’s important to investors evaluating your pitch isn’t that you’re using one framework over another, but that you investigated the options and can thoughtfully explain what you’ve gone with and why.

What’s more, many of these frameworks are now hosted offerings in the public clouds, which leads me to my next question:

Are you using AI cloud services or cloud models, and if not, why not?

There’s a substantial amount of AI work that goes on directly in the frameworks above, but increasingly the public clouds can hide these details from users and let them call a fully hosted API to get an answer. Why set up your own machine learning infrastructure to train and use image recognition models when Google’s Vision APIs are enough? Why work on your own natural language processing (NLP) models if Amazon’s Comprehend can do the job? There are certainly times and reasons when building your own implementation makes sense, and I like to hear teams discuss this precise tradeoff.
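
As a sketch of that hosted-API path, here’s roughly what sentiment analysis looks like through Amazon Comprehend via boto3 -- it assumes AWS credentials are already configured, and the input text is made up:

    import boto3

    # Assumes AWS credentials are already configured in the environment.
    comprehend = boto3.client("comprehend", region_name="us-east-1")

    response = comprehend.detect_sentiment(
        Text="The demo went better than we expected.",  # made-up input
        LanguageCode="en",
    )
    print(response["Sentiment"])        # e.g., "POSITIVE"
    print(response["SentimentScore"])   # per-class confidence scores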

Which is more of a limiter to your solution: computation time for training, or computation time for inference?

If the machine learning in an early-stage company is more marketing than tech, the team won’t make it past the first five questions. In cases where a team has convinced me of the basics, we’ll go deeper into their understanding of things like the limitations of each of the main phases of deep learning: training, as you build a model, and inference, when you use that model. Like several of the questions above, there’s no right answer here, as it’s a function of the application. It’s a test of technical chops as much as it is of critical thinking.
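
A crude but useful exercise is timing the two phases against each other. Here’s a minimal PyTorch sketch with a toy network and random data -- illustrative only, since real models differ by orders of magnitude:

    import time
    import torch
    from torch import nn

    model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    x = torch.randn(1024, 128)         # made-up batch of inputs
    y = torch.randint(0, 10, (1024,))  # made-up labels

    start = time.perf_counter()
    for _ in range(100):               # training: forward + backward + update
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    train_time = time.perf_counter() - start

    start = time.perf_counter()
    with torch.no_grad():              # inference: forward pass only
        for _ in range(100):
            model(x)
    infer_time = time.perf_counter() - start

    print(f"training: {train_time:.2f}s | inference: {infer_time:.2f}s")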

If you’re employing -- or planning to employ -- machine learning in a new commercial solution that has the potential to revolutionize a product, process, or even an entire sector, the bottom line is this: make sure your use of ML is more substance than hype.

Steve Herrod, Ph.D., is a managing director at General Catalyst, a venture capital firm with approximately $5 billion in capital raised. At GC, Herrod focuses on investments in security, infrastructure, and SaaS technologies including Datto, Illumio, Menlo Security, and Contrast Security. Prior to joining General Catalyst, he was chief technology officer and SVP of R&D at VMware. Herrod earned a Ph.D. in Computer Science from Stanford University and a BA from the University of Texas at Austin.

 

