Additional notes inspired by Rob May’s article here.

Rob and I go way back and this is the type of thing we talk about over beers. I feel obliged to add some notes of my own. I wish I vehemently disagreed on something. That would perhaps make for a more interesting post.

I will also give the caveat I give in talks at AI conferences — I don’t claim to be an expert on AI trying to innovate. I’m an expert on innovation trying to work on AI.

Here are five key things to discuss around AI:

1. AI innovation is limited

Innovation is one of the things I’ve spent time studying and trying to intentionally create across many products and technologies. The media likes to create clickable news across a wide array of topics (I suppose across all topics).

So, when I talk to experts (no matter the expertise), there’s some level of eye-rolling about the media coverage. Nuance gets oversimplified.

There’s a lot more to come: “AI fields like symbolic logic, evolutionary algorithms, and others have hardly been touched, and even for neural nets much of the work has been researchy, and is difficult to translate into applications.” (Inside AI)

As with most technologies, I advise people to lower expectations on what can be accomplished in 2 years and raise expectations on what can be expected in 10 years. The reason this phrase is often true is that humans imagine improvement as linear when it’s actually exponential: a linear forecast tracks the curve early on, then falls hopelessly behind it. This, of course, assumes we’ll continue at an exponential pace.
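
Here’s a toy illustration of that gap (my numbers are made up, not a forecast). Assume capability doubles every 2 years and compare it to a linear forecast extrapolated from the first year of progress:

```python
# Toy comparison: linear extrapolation vs. exponential growth.
# Assumptions (illustrative only): capability doubles every 2 years,
# and the linear forecast extrapolates the first year's progress.

def exponential(year, doubling_period=2.0):
    return 2 ** (year / doubling_period)

def linear(year):
    # Slope taken from the first year of exponential progress.
    slope = exponential(1) - exponential(0)
    return 1 + slope * year

for year in (2, 10):
    print(f"year {year}: linear forecast {linear(year):.1f}x "
          f"vs. exponential {exponential(year):.1f}x")

# year 2: linear forecast 1.8x vs. exponential 2.0x   (roughly in line)
# year 10: linear forecast 5.1x vs. exponential 32.0x (wildly behind)
```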

2. AI hardware is less talked about but important

I attended Mythic’s B round celebration. And, I attend events where we talk about quantum computing. This will be a fun one to watch from the sidelines while making investments. It will be a massive revolution. Not much more for me to contribute here: I’m learning and watching, and I recommend others learn and watch as well. This is a great area.

3. “AI is X”

I was one of the people propagating the imperfect “Data is the new oil” metaphor years ago. I spoke about it at conferences and followed up with a Medium post. My original pitch was that if you look at the very largest companies, we’ve moved from oil-based companies to data-driven companies as the leaders in creating new value. Data replaced oil as the driver of value.

Also, oil is a raw material that needs to be refined in order to be useful. Having done early big data and then AI projects at the corporate level, I noticed that’s where corporations get stuck: they are sitting on lots of data, but it needs to be refined. There are other areas (tangibility and fungibility, for example) where the metaphor doesn’t work. So, I wouldn’t read too much into it beyond a simple story I tell to help executives understand what they are up against. It also ties in well with the conversation, going back 20 years, about how all companies are becoming software companies.
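
What “refining” looks like in practice is mundane: deduplication, normalization, dropping gaps, labeling. A minimal sketch, with hypothetical column names and rules rather than any real project:

```python
import pandas as pd

# Hypothetical raw export (made-up data): duplicated rows, inconsistent
# labels, missing values -- typical of what corporations sit on.
raw = pd.DataFrame({
    "customer_id": [1, 1, 2, 3],
    "revenue_usd": ["1,200", "1,200", "980", None],
    "region": ["us-east", "us-east", "EU", "eu"],
})

# The "refining" pass: dedupe, drop unusable rows, normalize formats.
refined = (
    raw.drop_duplicates(subset="customer_id")
       .dropna(subset=["revenue_usd"])
       .assign(
           revenue_usd=lambda d: d["revenue_usd"].str.replace(",", "").astype(float),
           region=lambda d: d["region"].str.lower(),
       )
)
print(refined)  # 2 clean rows out of 4 raw ones
```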

“AI is the new electricity” falls into this category. I just interpret it as saying it will be ubiquitous and also enable new capabilities. Electricity did those things. As a business/market study, it probably won’t work.

The interesting thing about the “AI is electricity” metaphor is that AI will be ubiquitous. In 10 years AI will probably morph into just being a category of “software development”. It will be as ubiquitous as microchips. Notice that no startups pitch themselves as “we run on microchips” today. It’s ubiquitous and obvious. The tech will be readily available.

What this really comes down to is that there may not be any good metaphors for what’s happening. Technological civilizations only make all their software intelligent once, and it doesn’t look exactly like anything else before or after. That’s why AI is interesting: this arc of technology happens once, and then everything is different.

4. New workflows require a culture change

This is where innovation gets slowed down. The technology pioneers have a vision of where they want to go. But culture changes much more slowly.

A really intelligent AI developer who was doing amazing things called me last week to chat about AI use cases. There are tons of developers ready to work on things, and in Silicon Valley the blatantly obvious use cases get more investment and therefore more technical attention. He wanted some ideas about how to think about opportunities. I told him to look beyond the technological trends to the societal and cultural ones as well.

Bill Gross mentioned that timing is the number one issue driving success or failure across Idealab companies. Timing is a mix of “can you build it” and cultural timing. Once you know you can build it, the timing question comes down to society and culture. I think internal corporate cultures behave like broader social culture: they change in waves, and those waves impact timing.

I’ve created software products since I was 12, and I’ve done it professionally for 20 years. In those 20 years, I’ve probably launched 30 products. The vast majority were actually correct on the idea, but none had perfect timing (most were too early by 5 to 10 years). So, I now worry less about having the right idea and more about whether I’m introducing it into society and corporate cultures at the right time. If it takes 6 years longer than planned (as often happens), it might not fit the venture model. (I have an alternative model idea, but that’s another discussion.)

The workflow changes I see are around knowledge management and around augmenting the work product itself using pattern recognition. Both improve the work product and the effectiveness of the work so much that there’s no stopping them.

Now, we’re just down to timing. Once we get them working, it still comes down to timing based on when society and companies will accept them — and acceptance means people will need to change as well.

The big opportunities don’t just change the tech around how humans live; they require human behavior change as well. We need to be aware of that, even though it’s difficult to predict.

5. Bias and optimization dangers will happen way sooner than AI sentience issues

The question that people (and the media) like to talk about is AIs getting so smart they take over. I love to chat about it. But the bigger problem, the one that will actually happen sooner, is the optimization problem.

I would define the “optimization problem” (Rob gives an example) as the problem caused when we create systems that optimize at the macro level rather than at the micro (just-you) level we’re used to. A lot of AI systems will get their sustainable value from macro decision-making: you become an input to the system and also benefit from its output. The problem is created when you don’t understand how you got “optimized” versus the collective. When are you being optimized for your personal benefit versus for the collective benefit?
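
Here’s a minimal sketch of that distinction, with made-up numbers: a system assigns two drivers to two roads to minimize total travel time, and the macro-optimal plan hands “you” a slightly worse trip than you’d have picked on your own:

```python
from itertools import product

# Toy cost model (made-up numbers): travel time depends on how many
# drivers share a road, and the other driver has a 5-minute detour to B.
ROAD_TIME = {"A": {1: 10, 2: 14}, "B": {1: 12, 2: 13}}
DETOUR = {"you": {"A": 0, "B": 0}, "other": {"A": 0, "B": 5}}
DRIVERS = ["you", "other"]

best = None
for plan in product("AB", repeat=len(DRIVERS)):      # every possible assignment
    load = {road: plan.count(road) for road in "AB"}
    cost = {d: ROAD_TIME[plan[i]][load[plan[i]]] + DETOUR[d][plan[i]]
            for i, d in enumerate(DRIVERS)}
    total = sum(cost.values())
    if best is None or total < best[0]:
        best = (total, plan, cost)

print(best)  # (22, ('B', 'A'), {'you': 12, 'other': 10})
# Alone, "you" would pick road A (10 minutes). The macro optimizer sends
# you to road B (12 minutes) because that minimizes collective time. You
# were optimized for the collective, at a small personal cost -- and
# nothing in the output tells you that happened.
```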

There’s an inverse correlation between AI talk and AI development. I talk to a lot of really smart people in Silicon Valley daily. The closer people are to the code, the less concerned they are about the dangers of sentience and the more concerned they are about bias. I want to add optimization to that list as we build systems that optimize across populations. Bias and optimization problems are here now.
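
And “bias is here now” is concrete, not abstract. The standard first check is simple enough to sketch in a few lines (synthetic records and hypothetical groups below, just to show the shape of the check): compare a model’s error rates across groups.

```python
from collections import defaultdict

# Synthetic (group, true_label, predicted_label) records -- a real audit
# would use a real model's predictions on held-out data.
records = [
    ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

negatives = defaultdict(int)   # actual negatives (truth == 0) per group
false_pos = defaultdict(int)   # of those, how many the model flagged

for group, truth, pred in records:
    if truth == 0:
        negatives[group] += 1
        false_pos[group] += int(pred == 1)

for group in sorted(negatives):
    print(f"{group}: false-positive rate {false_pos[group] / negatives[group]:.0%}")

# group_a: false-positive rate 33%
# group_b: false-positive rate 67%  <- the system errs against group_b twice as often
```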