Date: April 30, 2019
Author: Bob Royce
Reading Time: 2 min 46 sec
TUG co-founder Bob Royce on the crucial role information architecture plays in the development of artificial intelligence.
Artificial Intelligence (AI) holds great promise as an innovative new technology… but only if done right. And it is Information Architecture (IA) that holds the keys.
Smart companies know you must align the gears of your technology to the gears of your operations to reap benefits. In the language of pace layers and complex systems, your company’s operations are a slower layer. They have the power to amplify faster, more innovative layers, or add incredible friction to the system.
Forcing your workforce to contort their actions to “benefit” from a new technology is not a good path to efficiency. Technology makes a great lever but a terrible taskmaster.
This is why smaller competitors and new companies have an advantage as disruptive technology emerges. If you are building the operational engine before accruing significant technical debt, you can tweak the fit between operations and technology as you go.
To succeed with technology in the long term, you must align it to your operational needs, not the other way around.
Going Beyond Data Science
The need to let operational concerns drive technology is especially true in the emerging field of AI. But you might never know this by talking with many data science teams. There is a sense that what machines do when they learn is inscrutable (which is true), so it makes no sense to try to get them off to a good start (which is a myth we’ll bust below). Remember that early meme of the computer age: GIGO, garbage in, garbage out? It is true of your data, but it is also true of your framing questions. Before you begin, you should clearly establish what good means in your context.
To ensure your next AI pipeline is set up for success, take a step back and ask three basic questions:
What questions might the data answer?
What does a good answer look like?
How will I measure success?
After you ask and answer these questions, you’ll be in a position to choose the right approach to each. You can also use this exercise to prioritize your efforts and maximize business value.
Let’s look at each question in more detail.
What questions might the data answer?
Practical AI is not about finding an insight in search of a problem; it’s about identifying problems that might be answered through intelligent inquiry. While it might seem trivial at first, experience has taught us that with AI projects, “well begun” is truly “half done.” It pays big dividends to begin by looking at the problem space from the perspective of every team with a stake in the outcomes. This will help identify both obvious and novel opportunities for benefit, and it should also illuminate potential blockers to moving forward: an answer’s benefit is limited by the ability to put that answer into action.
What does a good answer look like?
This is where the rubber meets the road. AI is great at classifying things when the basis for classification can be mathematically defined (such as image recognition), but if your goal is to classify the way a human would, or to support a human interaction, then you need a strategy for defining what good means for the people you are serving. The machine will only learn what we teach it, so it’s important to develop a strategy that accounts for all the different people we need to serve. If we teach it the preferences of a homogeneous group of people, we’ll build in bias toward that group and fail to serve a diverse audience.
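To make that bias concern concrete, here is a minimal, purely hypothetical sketch (not taken from any TUG project): a classifier is trained on synthetic data representing only one group’s preferences, then evaluated on a second group whose preferences differ. The group definitions, weights, and numbers are all invented for illustration.

```python
# Hypothetical sketch: a model trained only on a homogeneous group can look
# accurate for that group while serving another group poorly.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

def make_group(n_samples, preference_weights):
    """Synthetic users whose notion of a 'good' result depends on group-specific weights."""
    X = rng.normal(size=(n_samples, 2))
    y = (X @ preference_weights > 0).astype(int)
    return X, y

# Two invented groups with different notions of what a good answer is.
X_a, y_a = make_group(2000, np.array([1.0, 0.2]))
X_b, y_b = make_group(2000, np.array([0.2, 1.0]))

# Train only on group A -- the "homogeneous" training set.
model = LogisticRegression().fit(X_a, y_a)

print("accuracy on group A:", accuracy_score(y_a, model.predict(X_a)))
print("accuracy on group B:", accuracy_score(y_b, model.predict(X_b)))
# Expect high accuracy for A and noticeably lower accuracy for B,
# because the model only ever learned A's preferences.
```

The point is not the specific numbers but the pattern: high accuracy for the group the model was taught, and noticeably worse results for everyone else.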
How will I measure success?
This is often a difficult question to answer definitively, but you can be sure of one thing: if a benefit is hard to quantify, even an obviously significant one, it is likely even harder to actually reap. On the flip side, when the ROI is easy to measure, it is easier to define and refine a technology strategy to capture the benefit (or at least you’ll understand the blockers).
Once you’re done with this, you can create a table that lists your key questions, rates them by impact, and sorts them into three basic types:
Known areas to improve (known knowns): Amplifying humans
Known problems that require better definition (known unknowns): Targeted learning
Things we do not yet know about (unknown unknowns): Broad discovery
Applying technology expertise
With the problem space defined, your data science team can now explore different ways to solve the problems. It is also a good time to take stock of your AI capabilities. By identifying the best strategy to answer each question, and triaging the risk, complexity, and effort, you now have the basis for prioritizing and budgeting the work ahead.
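As a purely illustrative sketch of that triage (the questions, categories, and scores below are invented, not TUG’s method), a few lines of Python can turn impact and effort estimates into a rough priority ordering:

```python
# Hypothetical prioritization sketch: rank candidate questions by estimated
# impact against combined risk/complexity/effort. All entries are invented.
from dataclasses import dataclass

@dataclass
class CandidateQuestion:
    question: str
    question_type: str   # "Amplifying humans", "Targeted learning", "Broad discovery"
    impact: int          # estimated business impact, 1 (low) to 5 (high)
    effort: int          # combined risk/complexity/effort, 1 (low) to 5 (high)

    @property
    def priority(self) -> float:
        # One simple heuristic: value delivered per unit of effort.
        return self.impact / self.effort

backlog = [
    CandidateQuestion("Which support tickets can be auto-routed?", "Amplifying humans", 4, 2),
    CandidateQuestion("Why do some onboarding flows stall?", "Targeted learning", 3, 3),
    CandidateQuestion("What unexpected patterns exist in usage data?", "Broad discovery", 2, 4),
]

for item in sorted(backlog, key=lambda q: q.priority, reverse=True):
    print(f"{item.priority:.2f}  [{item.question_type}] {item.question}")
```

Real prioritization involves more judgment than a single ratio, but even a crude score like this makes the trade-offs visible and easy to debate.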
Artificial intelligence, in all its many forms, holds great promise for business. But we’re quickly moving past the low-hanging fruit and into a phase where careful planning is required to truly benefit. Smart companies will put human and operational considerations at the center of those plans and establish strong collaboration between their operations and data science teams. At The Understanding Group, we partner with AI companies to clarify complex problem spaces and define what good means.