[SystemSafety] Educating the Postmodern Systems Engineer
Steve Tockey
steve.tockey at construx.com
Wed Aug 14 21:45:31 CEST 2024
Les,
Good to hear from you, as always. I’m buried in some substantial customer work for the next couple of weeks, so not a lot of time for a detailed reply. However, here’s the top-level summary of where I would want to go:
First, I am pulling the “description” of Software Engineering from the soon-to-be-released version 4 of the “Guide to the Software Engineering Body of Knowledge” (aka “SWEBOK Guide” v4). The final publication version should be out in October or November (fingers crossed) but you can find the public review draft here:
https://waseda.app.box.com/s/r1j1mavf3glhtf6qh5j0xxdeb4uaws5h
Not a lot will change from this public review draft to the final publication version, mostly correcting some typographical errors and fixing some broken formatting. The content is substantially in place.
Second, if you look inside SWEBOK Guide v4 you will see knowledge area Chapter 14, “Software Engineering Professional Practice”. Specifically, section 1.2 starting on page 14.3 calls out "Codes of Ethics and Professional Conduct”. My point here is that while SWEBOK Guide v4 does not go anywhere near the level of detail that you go into below, it at least makes explicit that a true Software Engineer must behave in an ethical manner. Quoting page 14-4:
“Following their commitment to the health, safety, and welfare of the public, software engineers shall adhere to the ten principles according to IEEE Code of Ethics adopted by the IEEE Board of Directors, June 2020.”
Finally, on a slightly different but still related topic, I got my Master of Software Engineering from Seattle University. Seattle University is a Jesuit school. Every Jesuit school has a curriculum requirement that every degree program include an element of ethics. Not as an elective, as a core curriculum requirement.
I was in Palo Alto, CA several years ago listening to a radio talk show while commuting. One comment that stuck out was that if you look at corporate CEOs and executive managers, those who graduated from Jesuit universities are significantly less likely to get involved in corporate malfeasance such as fraud, embezzlement, extortion, insider trading, etc. If I recall correctly, the interviewee declared that no corporate executive ever convicted of corporate malfeasance in the USA in, say, the last 50-100 years was a graduate of a Jesuit university. So maybe explicit education in ethics does make a difference?
All the best,
— steve
On Aug 13, 2024, at 5:02 PM, Les Chambers <les at chambers.com.au> wrote:
Steve
It's wonderful to see this list light up so passionately. Rumours of the death
of safety-critical systems debate were exaggerated.
Your detailed breakdown of what a software and systems engineer should know
resonated with me both for what it includes and excludes. It's correct but
incomplete. Looking back on 50 years in systems engineering I've concluded
that something was missing from my education. You know something is missing
when you're flummoxed - faced with a situation where you have no analytical
skills to guide your next move. For me, it was ethical dilemmas and people
problems. But this is a reductionist view, the bigger picture is: I was given
no grounding in philosophy. No one taught me how to live as an engineer. This
ignorance was annoying then but is deadly now as engineers face mind-bending
judgement calls such as, "Is this AI sentient ... ergo, is it in a safe state?"
I refuse to retire and go quietly so I've been reading philosophy and trying
to stay on top of developments in artificial intelligence (an impossible
task), specifically its application to safety critical systems - Tesla and the
like. It's an exciting but horrifying experience. Exciting due to the endless
utility of the six artificially intelligent agents I converse with every day
and horrifying when I witness a neural network deployed to drive an
automobile. It's doing my head in that a giant blob of parameters (a neural
network) has replaced a highly deterministic multilayer control systems
architecture the like of which we have been refining for the past 50 years.
This blob is not assembled subject to an unambiguous, complete and correct
requirement specification and can therefore not be validated with a suite of
human or machine-executable, module, unit, integration and systems tests. All
we have is actors such as Mira Murati (Chief Technical Officer of OpenAI)
shrugging, "Aw shucks, we don't understand why it works so well." Or gems from
the likes of tech dudes such as Elon Musk, "Full Self-Driving V12.5 is good to
go, it no longer spills my coffee."
Clearly, we are witnessing the death of determinism in control systems. It is
our sacred duty as professional engineers to turn this trend around "for the
good of mankind". This is not a fad. Neural nets are a postmodern fact of
life. You can take a Robotaxi in San Francisco, Los Angeles, Las Vegas, Austin
and Phoenix.
Where do we start? Just as neural nets deployed in safety-critical
applications need to be wrapped in a postmodern variant of systems engineering
discipline so do engineers need an upgrade to their educational wrapper - a
solid grounding in philosophy at the core of all undergraduate engineering
courses. And it must be COMPULSORY, not an elective.
Over the years, in an unexamined creeping manner, engineers have accumulated
massive power. We have designed nuclear bombs placing the world on a hair-
trigger for the insanity of unwinnable nuclear war. Military technologists
freely admit that the Ohio-class submarine is a greater threat to the planet
than a meteor. We build deep fake tools that can mess with the minds of 3
billion people overnight. Artificial general intelligence will put this power
on steroids, we therefore need to teach engineers how to wield it with the
wisdom wrapped in philosophy. My thesis is that wisdom can and must be taught
not left in the nebulous netherworld we call experience where the test comes
first and the lesson later. Some decisions must be right the first time,
there being no opportunity for continuous improvement on a dead radioactive
planet populated by 6 billion corpses.
Graduate engineers need to be celebrated not only for the equations they can
solve but also for who they are; educated individuals, possessed of open,
calm, self-controlled, stoic, stable, moral, measured, rational and logical
minds with an unshakeable commitment to the profession, virtuous, incapable of
committing an immoral act; not as culturally illiterate, laissez-faire,
technical automatons (a guaranteed outcome of your current curriculum Steve).
All of the above are learned behaviours that are reinforced by experience but
will struggle for an engineer's attention unless the frameworks and principles
are taught and valued at the engineering origin - the University.
Your summary curriculum is engineering hygiene, necessary but not sufficient.
We need to move our undergrads beyond algorithms to engage with the elements
of our humanity that non-engineers are merrily moving into silicon. The
squishy nondeterministic stuff that goes on in the brain.
The engineering education must instil a state of mind - a certainty in "who am
I?". Over the years we have evolved from spot problem solver, to systems
designer, to where we find ourselves today - Cognitive Systems Designer, a
professional specializing in the creation of intelligent systems that
replicate or enhance cognitive functions, such as perception, learning, and
problem-solving, by utilizing insights from philosophy, psychology,
neuroscience and artificial intelligence.
The problem is that much of the customer-facing AI product development work is
currently being led by non-engineers. Look up Demis Hassabis, Sam Altman,
Jack Clark et al.: they are neuroscientists, computer scientists and
philosophers. Brilliant people, but lacking an engineering mindset, as
evidenced by safety as an afterthought in chatbot development. I would be
surprised if any of them has ever heard of the concept of Functional Safety.
My point is that the brain that works at the AI system product coalface must
have an engineering mindset with philosophical reasoning embedded. Designers
make decisions that can't or won't be reversed especially if they increase
shareholder value. Frances Haugen testified that Facebook knowingly left
algorithms injurious to user mental health in the mix to preserve cash flow. A
thought experiment: what if the designers just said, "No!" Where would you
find the courage to do such a thing?
And so we have come full circle to a place where ancient philosophy offers a
wealth of insights that can be practically useful in this cognitive state. My
humble suggestions are:
1. Socratic Method: This technique emphasizes dialogue and questioning to
stimulate critical thinking and illuminate ideas. Cognitive Systems Designers
can use this method to refine their ideas, evaluate assumptions, and enhance
problem-solving through collaborative discussions. Postmodern Systems
Engineering is a team sport that will welcome synthetic agents and assistants.
2. Aristotelian Virtue Ethics: Aristotle's focus on virtue ethics encourages
designers to consider the moral implications of their work. This perspective
fosters a commitment to creating systems that prioritize human well-being,
flourishing, and responsible use of technology. "For the benefit of mankind"
means that, when interests conflict, we answer to mankind, not our political
or commercial masters. The courage to hold this line is a function of a clear
view of who you are.
3. Realism vs. Idealism: Philosophical debates on realism and idealism can
inform Cognitive Systems Designers about the nature of perception and reality.
Understanding how different perspectives affect cognition can help in
designing more intuitive systems. The naive engineer will be shocked to
discover that reality is largely a function of personal perception, with the
possible exception of gravity.
4. Teleology: The concept that everything has a purpose can guide designers in
understanding the intended functions of cognitive systems. By clarifying the
goals of their creations, designers can align technology with human cognitive
processes and needs.
5. Mind-Body Dualism (Descartes): Exploring the relationship between the mind
and body can inform the development of interfaces in cognitive systems that
account for the physical and psychological experiences of users, enhancing
user engagement and interaction. Research indicates that a component of our
wisdom resides in the body, independent of the mind. For example, the body
does not want to lie - evidence the success of polygraph technology.
6. Stoicism: This philosophy emphasizes resilience and rationality in the face
of challenges. For designers, it can inspire a focus on creating robust
systems that empower users to cope with uncertainties and enhance decision-
making. In bad situations, engineers need to stay calm and rational when
surrounded by panic. Stoicism provides simple tools to deal with high-stress
events. Marcus Aurelius instructs us, "It's not the event that sparks panic,
it's how you react to it, and this is under your control."
7. Phenomenology: The study of experience and consciousness can provide
valuable insights into user interaction with cognitive systems. Understanding
how people perceive and interpret their experiences can lead to more
empathetic and effective designs.
8. Pragmatism (William James): The pragmatic approach encourages designers to
consider the practical outcomes of their technologies. Focusing on real-world
applications and user impact can help create cognitive systems that are both
functional and meaningful. What was the designer of the B61 nuclear bomb
thinking when he provided for configurable yield (0.3 to 340 kilotons)? "Aw
shucks, if we're mildly upset we'll kill a few thousand enemy but if we're
really angry we can smoke 2 million."
And so I beat on (with apologies to F. Scott Fitzgerald), a boat against the
current, drawing the profession back into the past to assure its future. It
could be an act of stupid courage, attempting to nudge a local University in
this direction through my association with a colleague who lectures in their
postgraduate engineering stream.
Here's hoping that all of the above will attract some comment from this list.
In particular from the university educators amongst us. I read many American
universities are aggressively pursuing AI in education. Is anyone pursuing
philosophy for engineers?
And to the freshman engineer a warning, suit up and front up to Marcus
Aurelius, Epictetus, Seneca, Aristotle, Socrates ... et al. Listen to what they
have to say, you don't want to find yourself in this situation:
A broken wing rocks on the sand
Beside a far-off sea
In pitch black faith was placed in men
Christ
One of them was me!
Cheers
Les
PS: A practical suggestion. Read Ryan Holiday, The Daily Stoic. It's not
rocket science, five minutes a day could transform you into a Philosopher
Engineer.
--
Les Chambers
les at chambers.com.au
https://www.chambers.com.au
https://www.systemsengineeringblog.com
+61 (0)412 648 992