3 Lessons On AI
Most people simply don’t understand AI. When I say ‘most people,’ I mean most people who are working with these technologies. And when I say ‘don’t understand,’ I don’t mean the fancy math that underpins neural networks; I mean they fail to grasp the context in which AI sits. There are three key things that I believe anyone working with AI needs to understand, and I’ll explain each of them below:
YOU CAN’T TREAT EMERGING TECHNOLOGIES THE SAME WAY AS ENTERPRISE TECHNOLOGIES:
One of the first things I always do in conversations about AI is to understand the maturity of the innovation capability. It goes without saying that if the innovation capability is merely a corporate ‘shopping’ department, purely focussed on tech transfer, then the ability of the organisation to truly embed the capability internally is going to be severely hampered. Equally, if the innovation lab is merely the ‘creche’ for techies – somewhere for them to go in their lunch-breaks to experiment with gadgets – then the organisation is going to be hamstrung in delivering any real value from these investments.
Equally misguided is when organisations treat their innovation function like an extension of the Change team. Change plays an important role in any firm. The principles of great project management are thankfully now well understood, and a class of professional change management experts exists in every developed market to help deliver projects on time, to scope, and within budget. Whether Agile or Waterfall, governance and methodology are well honed to limit surprises. At the same time, however, the focus on delivery gets in the way of discovering the value of the experiment. The same tension plays out in classrooms up and down the country. Do you teach kids to learn, or do you train them to pass exams? The pragmatic answer is that you need to do both, and the same is true for innovation.
Innovators like me are focussed on failure. Now, this might seem like a strange thing to admit – but the fact that so few of us are bold enough to admit this is a symptom of the problem. What is needed is the creation of a mechanism that harnesses the drive for delivery of change professionals, at the same time as helping innovators stay fresh and learn from their experiments. This is what I mean by a mature innovation capability; one where a positive outcome for the organisation is targeted, but where the value is measured by the quality, throughput and size of the portfolio – rather than simply the output.
Robotic Process Automation technologies are a great example of platforms that have reached a level of maturity at which they can be successfully implemented by adopting traditional project management approaches. AI, however, is still an emerging technology – a bag of bolts. To get success from it, you need an innovation team that can support its development.
UNLESS YOU GO ‘ALL-IN’ WITH MICROSERVICES AND YOUR API ECOLOGY, YOUR AI WILL ONLY EVER DELIVER INCREMENTAL IMPROVEMENT TO YOUR BUSINESS:
People talk of AI being a ‘big bet’. My father was a compulsive gambler. The one thing he taught me from those years watching the horse racing or soccer results was that betting is not a system that is likely to create success. Sure, you might win now and then, but on the whole, your confidence will atrophy into delusion. Betting isn’t a wise career choice. See point 1 – if you have a mature innovation capability, your investment in emerging technologies such as AI will have much more predictable outcomes and feel much less like betting. I also recoil at the ‘big.’ Stories from Silicon Valley of overnight success seem to have polluted the minds of corporate leaders such that they believe if they drink enough of the AI Kool-Aid, they’ll strike gold. The truth is that 99 percent of AI initiatives will only ever deliver incremental benefit to the organisation. One percent faster, $0.10 cheaper per process, or ten percent less risk. That’s OK. Even with healthy innovation lifecycle management and a comprehensive opportunity assessment across the organisation, incremental gains are likely all you will ever achieve.
The thing to realise about hitting it big is that the highly disruptive innovation to your business or your industry is going to come from not just doing what you do today with added AI in the mix, but by repackaging aspects of what you do today, likely with those things done by other people, in order to offer a compelling solution that wins big in the market.
Let’s look at Uber by way of example. They are the world leader in traffic routing: their AI technologies have the most data, and the availability of compute on their customers’ handsets gives them the most scaled compute resource on the market. Their rise to success had nothing to do with either of these factors, though; it was down to their ability to package existing technology features – GPS-enabled smartphones and mobile payment technology – together with offline capacity: a network of drivers with time on their hands. This repackaging was the secret to their success.
The lesson here for those of us in incumbent organisations looking for a potentially transformative opportunity from AI is to create the conditions that enable rapid repackaging of systems and processes, to take advantage of where AI can slot in and make the difference. You don’t see blockbuster conferences like the equivalent of CogX for microservices or APIs, but the truth is, these technologies are much ‘bigger bets’ than AI. Without awesome technology to ensure the seamless connection between end customers, third parties, and your existing processes and systems, you can build as many ‘killer’ apps as you like, but they just won’t work. Microservices are the cure to Arnie’s arthritis.
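The repackaging idea above can be sketched in code. This is a minimal, illustrative example (all names are hypothetical, not from any real system): an existing capability is wrapped behind a small, swappable service interface, so that an AI-driven implementation can later slot in without touching the callers – the same property a well-designed microservice or API boundary gives you at the system level.

```python
# A minimal sketch of "repackaging behind an interface" (hypothetical
# names, not a real framework): callers depend only on the interface,
# so an AI component can replace the rule-based one without changes.
from dataclasses import dataclass
from typing import Protocol


class Router(Protocol):
    """The stable contract: any routing implementation must honour it."""
    def route(self, origin: str, dest: str) -> list[str]: ...


class SimpleRouter:
    """Today's capability: rule-based routing via a fixed lookup table."""
    def __init__(self, table: dict[tuple[str, str], list[str]]):
        self.table = table

    def route(self, origin: str, dest: str) -> list[str]:
        # Fall back to a direct trip if no known route exists.
        return self.table.get((origin, dest), [origin, dest])


class LearnedRouter:
    """Tomorrow's capability: an AI model behind the same interface."""
    def __init__(self, model):
        self.model = model  # any object exposing predict(origin, dest)

    def route(self, origin: str, dest: str) -> list[str]:
        return self.model.predict(origin, dest)


@dataclass
class DispatchService:
    # Dispatch logic is written once, against the Router contract;
    # swapping SimpleRouter for LearnedRouter needs no changes here.
    router: Router

    def plan(self, origin: str, dest: str) -> list[str]:
        return self.router.route(origin, dest)


svc = DispatchService(SimpleRouter({("A", "C"): ["A", "B", "C"]}))
```

The design choice this illustrates is the point of the section: the ‘bigger bet’ is the boundary itself. Once the contract exists, the AI is just one more implementation that can be slotted in.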
YOUR TECHIES DON’T HAVE THE ANSWER TO AI ETHICS.
So, you’ve nailed the innovation capability and conducted a wholesale opportunity assessment across your firm. You’ve got a portfolio of initiatives that are delivering incremental value to the organisation, and you’ve even succeeded at repackaging third-party capability with your own to deliver the big, disruptive stuff. Well done, hero.
If, like Mr. Zuckerberg, you’ve made it big without a thought for ethics – then shame on you; you’ve played the system, but good luck sleeping at night.
I feel that it’s only a matter of time before there is an ‘extinction rebellion’ equivalent for the technology industry. Too many companies are looking at emerging technology as a silver bullet, planning to worry about the fluffy stuff later.
We all have a responsibility to each other and society. If we fail on this, we fail to be human.
Ethics is different from compliance. It’s not because Facebook played fast and loose with regulators that they face potentially irreparable reputational damage; it’s because they failed to have a conversation with their stakeholders about what they were trying to do.
Ethics is also not the same thing as safety. I’m tired of all the conversations right now about AI ethics that only seem to revolve around explainability (XAI) or overcoming bias. These are technical considerations; conflating them with ethics is like debating engineering tolerances in firing-chamber design, or whether to give kids bulletproof backpacks for school, in a conversation about the ethics of firearms.
Also, ethics shouldn’t be a thing that’s decided by a group of ‘experts’ meeting behind closed doors. To do so is to misunderstand the problem. Many major tech firms have such ‘ethics boards,’ but how many of them open their decisions, reasoning, or even their composition to public scrutiny? The problem is similar to that of the tobacco industry in the 1950s: we know that emerging tech has a long-term impact on the health of society, but how many of us are willing to acknowledge this – and even slow its deployment – ahead of the pursuit of profit or capital creation?
The opportunity is there for people and organisations to get Digital Ethics right: to build it into a pillar of capability right from the outset. My belief is that, in the long run, this strategy will pay off. Only time will tell.