We need to change the way we think about artificial intelligence. It is actually neither artificial nor intelligent in the way we normally consider these concepts. AI implies decisions made by computers programmed to choose to act in a certain manner. But those computers are programmed simply to make yes/no decisions: in an electronic circuit, it’s either yes or no, the circuit is on or it’s off. The major dynamic at work, though, is that each yes/no decision draws on a significantly larger set of information variables that have been programmed to carry some importance in the ultimate outcome.
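The mechanism described above, many weighted yes/no signals combining into one final yes/no, can be sketched in a few lines. This is a minimal illustration only; the variables, weights, and threshold here are hypothetical, not anything the essay specifies.

```python
# Illustrative sketch: many yes/no signals, each weighted by how much
# that variable matters, collapse into one final yes/no decision.
# The signals, weights, and threshold below are hypothetical.

def decide(signals, weights, threshold=0.5):
    """Each signal is a yes/no (1 or 0); weights encode how important
    each variable is to the ultimate outcome."""
    score = sum(w * s for w, s in zip(weights, signals))
    return score >= threshold  # the final on/off of the circuit

# Three hypothetical variables, each already reduced to yes/no:
signals = [1, 0, 1]
weights = [0.4, 0.3, 0.3]
print(decide(signals, weights))  # 0.4 + 0.3 = 0.7, clears the threshold
```

Scale the same idea up to billions of weighted signals and you have the shape of the systems the essay is describing.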
This is really no different from what we consider intuition. Intuition is the subconscious weighing of a large number of variables that we’ve internalized over time and that play into our reaction to a given situation. A mother’s intuition that “that guy is probably not a good person for my daughter to date” is based on years of experience and judgments, relationships and observations, small cues and whispered asides that combine to distinguish more than seems obvious on the surface. We marvel at some sports stars’ athletic ability but miss the point that they’ve learned how to see the big picture: where the ball is traveling, how the opponents are moving, how to plant a foot, shift their weight, exert exactly the muscle strength required to push up into the air, and stretch an arm and fingertips to put a hand in position to grab a ball that no normal human should be able to catch.
We input stock information and world situations, environmental histories and economic lessons as data points, feeding millions and billions of bits of data into “artificial” and “intelligent” systems that can see patterns and decipher outliers, yielding business returns better than normal human intuition could ever produce. Some consider this cheating the system. Perhaps so; or perhaps it is not so much cheating as simply understanding better what drives the decisions in the first place, gathering all those mini decision points together to get a billion yes/no answers faster than your brain could ever imagine or process them.
Does AI get it right all the time? No. Why? Because some factor that had not previously been considered affects the outcome. So the next time, that factor is rolled into the equations, and the next decision is a little more finely tuned. Will that decision always be right? No. Why? Because there is yet another factor that hadn’t been in the equation. So we roll that into the intelligent decision process. But is that really any different from non-artificial, organic intelligence (read: your brain)? Not really, because that’s what Mom does when she knows you shouldn’t get too serious about that guy: he’s really a jerk even though he seems like the best catch in town.
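The refinement loop described above has a classic computational form: when a decision comes out wrong, the overlooked factor is folded back into the weights so the next decision is a little more finely tuned. A minimal sketch using a simple perceptron-style update follows; the example data and learning rate are hypothetical, chosen only to illustrate the loop.

```python
# Sketch of the refinement loop: a wrong answer nudges each variable's
# importance so the next decision is a little better. Perceptron-style
# update; the examples and learning rate below are hypothetical.

def predict(features, weights, bias):
    # The final yes/no: weighted evidence measured against a threshold.
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

def train(examples, n_features, rate=0.1, epochs=10):
    weights, bias = [0.0] * n_features, 0.0
    for _ in range(epochs):
        for features, answer in examples:
            error = answer - predict(features, weights, bias)
            # Wrong decision? Roll the overlooked factor into the equation.
            weights = [w + rate * error * x for w, x in zip(weights, features)]
            bias += rate * error
    return weights, bias

# Hypothetical yes/no history: both conditions must hold for a "yes".
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(examples, n_features=2)
```

After a few passes over its mistakes, the model stops making them on this data, which is exactly the "a little more finely tuned each time" dynamic the paragraph describes.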
After all, what is intelligence anyway? It is conceptualizing and describing relationships in ways that previously were not understood or explained. Did Einstein invent relativity? No, but he described it in a way not previously considered, and it opened the door to further discovery and exploration. Did Newton invent gravity? I don’t think so, but he conceptualized and explained it in a manner that helped others move forward. Did Kepler create planetary motion? No, but he described it in ways that still shape how we explore the universe. And inventors combine pieces of what we know with other pieces of what we know to create entirely new ways of putting objects and puzzle pieces together, ways that improve human life and how we do things every day. Their observation and analysis of outlier activities are pulled into the mainstream of how and what we do, allowing us to reinvent the process of our processes. Is it intelligent? Sure. Is it different from using artificial, inorganic computational analysis to look for trends or outliers that open the door to new ways of doing business, change the way we consider the world, or help us make better decisions? Not so much.
So what does this all have to do with the federal acquisition process, the “tyranny of one size fits all”, and reinventing government? Every situation is different. Every situation is the same. In the end, you have to make a decision. Should the rules be set in stone? Yes. Should you be given the judgment and discretion to use your head? Yes. Should we be agile in developing new models? Yes. Should we understand and incorporate mandatory specifications that are unique to the situation? Absolutely, yes. Should we watch for bad behavior on the front end and reward good results on the back end? Yes. Should we develop “non-organic” intelligence to frame decisions? Yes. Is there ever enough data to make the perfect decision? No. Do we start with rules and guidelines that direct us, and the discretion to bend them when necessary? Yes. Do we continue to refine the process, the rules, and the guidelines, gathering more data and more intuition points, trusting that the decisions will get better? Yes. Will we still make bad decisions? Yes. Do we ever stop refining the computational processes?
AI is not artificial and it’s not intelligent. It is an extension of your mother’s intuition. Why? I don’t know. Think about it. Have some chicken soup and tell me about your day.