“Boeing’s 737 Next Generation airliners have been struck by a peculiar software flaw that blanks the airliners’ cockpit screens if pilots dare attempt a westwards landing at specific airports.” (The Register, 8 Jan 2020)

Reading the above news made me ponder the challenges of building software for critical systems, and the result is this brief post: overconfidence in one’s software is dangerous.

Writing software that operates aircraft is extremely complex and difficult, and those who do it are brilliant engineers. At the same time, software engineers would do humanity good by remembering that any software will always have bugs, and by planning for contingencies when things go wrong or break.
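To make that concrete, here is a minimal sketch in Python of what “planning for contingencies” can look like in code. Everything here is hypothetical (the function names, the fail-safe message); the point is only the pattern: assume the component will fail, decide the fail-safe behaviour up front, and never let an unhandled error blank the whole screen.

```python
import logging

logger = logging.getLogger("display")

# Hypothetical fail-safe output, decided at design time, not at crash time.
FAILSAFE_MESSAGE = "DISPLAY UNAVAILABLE - USE STANDBY INSTRUMENTS"

def render_approach_display(airport_code: str, runway_heading: float) -> str:
    """Hypothetical display routine: compute approach guidance for a runway.
    Like any software, it can contain bugs (say, for certain headings)."""
    if not 0 <= runway_heading < 360:
        raise ValueError(f"invalid heading: {runway_heading}")
    return f"Approach {airport_code}, heading {runway_heading:03.0f}"

def safe_render(airport_code: str, runway_heading: float) -> str:
    """Contingency wrapper: a bug in the renderer degrades gracefully
    to a known fail-safe message instead of taking the display down."""
    try:
        return render_approach_display(airport_code, runway_heading)
    except Exception:
        logger.exception("display computation failed; showing fail-safe message")
        return FAILSAFE_MESSAGE
```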

In today’s AI systems, where the models are a black box, prompting for a human override before (or while) acting on the AI’s recommendation may be a good idea, a kind of safety valve. Yes, humans will bring in their biases and mistakes, resulting in errors, but a system that keeps humans (with control) in the loop will, hopefully, perform a lot better than one that does not.
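Again as a sketch only, not any real AI API: the names below are made up, and the point is the shape of the loop. The model proposes, a human disposes, and on silence or refusal the system falls back to a conservative default.

```python
def ai_recommendation(situation: str) -> str:
    """Stand-in for a black-box model; in reality this could be any classifier."""
    return "DIVERT" if "storm" in situation else "CONTINUE"

def act_with_human_override(situation: str) -> str:
    """Safety valve: ask a human before acting on the model's recommendation."""
    proposed = ai_recommendation(situation)
    answer = input(f"AI recommends {proposed!r}. Accept? [y/N] ").strip().lower()
    if answer == "y":
        return proposed
    # Conservative default when the human declines or does not respond.
    return "HOLD"

if __name__ == "__main__":
    print("Action taken:", act_with_human_override("storm ahead near waypoint"))
```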

It has taken millennia for great minds like Confucius, Thiruvalluvar, Chanakya, Aristotle and others to observe, understand and guide humans on how to act in difficult times. So a century or two is a reasonable time for us to teach our computers how to act under trying situations. In saying this, I am not underestimating the pace of technological invention and innovation. I am simply suggesting that we need not rush to deploy computer systems before they are ready and thoroughly tested. Let us not be rash! Technology’s role should be to improve people’s lives and save lives.

We should’ve listened to Arnold Schwarzenegger (“Terminator” movies) and learned our lesson :-)
