[SystemSafety] AI and the virtuous test Oracle - we need to talk about Frank
Les Chambers
les at chambers.com.au
Mon Jul 3 10:06:25 CEST 2023
[Bertrand RICQUE] If intelligence is the ability to solve complex problems,
what is a complex problem?
Bertrand
I agree. Max Tegmark's definition of intelligence is a tad abstract, and
hardly a useful input to a comply/non-comply decision. But cut him some
slack: he is a physicist and cosmologist. His doctoral research focused on
theoretical physics, specializing in precision measurements of the cosmic
microwave background radiation. I doubt he has ever had to build safety
into a system, or certify that a system is safe so far as is reasonably
practicable.
This provides further evidence for my assertion that engineers are on
their own. We, and we alone, will have to solve this definition-of-terms
problem in the new AI domain; no one else particularly cares. Having
created Frankenstein, we need a deterministic method of deciding whether
we need to worry about Frank. Is he alive? Is he sentient? Is that a good
thing or not?
Start with a ChatGPT prompt: "Discuss the fundamental components of human
intelligence", or words to that effect.
Les
> If intelligence is the ability to solve complex problems, what is a complex problem?
>
> Bertrand RICQUE
> Safety and security standardisation Expert
> Program Support Manager
> Optronics and Defence Division/Customer Support
> Safran Electronics & Defense
>
> P +33 (0)1 58 11 96 82 M +33 (0)6 87 47 84 64
> bertrand.ricque at safrangroup.com
> 102 Avenue de Paris
> 91300 MASSY FRANCE
> www.safran-electronics-defense.com
>
> > -----Original Message-----
> > From: systemsafety <systemsafety-bounces at lists.techfak.uni-bielefeld.de>
> > On behalf of Les Chambers
> > Sent: Friday 30 June 2023 06:12
> > To: Steve Tockey <steve.tockey at construx.com>;
> > systemsafety at lists.techfak.uni-bielefeld.de
> > Subject: Re: [SystemSafety] AI and the virtuous test Oracle - intelligence
> >
> > Steve
> >
> > RE: "I argue, human intelligence is not something that a computer could
> > ever actually be capable of"
> >
> > According to Max Tegmark, the jury is still out on this question.
> > Max is a professor of physics at MIT and president of the Future of Life
> > Institute (ref Tegmark, Max. Life 3.0, Penguin Books Ltd). His claim to
> > fame is that he was in the room when Larry Page called Elon Musk a
> > speciesist.
> >
> > Max provides evidence that the nature of intelligence is a subject for
> > debate - as yet, unresolved.
> > Max: "My wife and I recently had the good fortune to attend a symposium
> > on artificial intelligence organized by the Swedish Nobel Foundation, and
> > when a panel of leading AI researchers were asked to define intelligence,
> > they argued at length without reaching consensus.
> > ...
> > there are many competing ones, including capacity for logic, understanding,
> > planning, emotional knowledge, self-awareness, creativity, problem solving
> > and learning."
> >
> > Max's pet definition is:
> > "intelligence = ability to accomplish complex goals"
> >
> > Max makes some outrageous assertions:
> > "This substrate [platform] independence of computation implies that AI is
> > possible: intelligence doesn't require flesh, blood or carbon atoms."
> > Max uses the following metaphor:
> > "Waves, for instance: they have properties such as speed, wavelength and
> > frequency, and we physicists can study the equations they obey without
> > even needing to know what particular substance they're waves in."
> >
> > Oh well, they say, "All great truths begin as blasphemies" (George
> > Bernard Shaw).
> >
> > On the subject of determinism, Max goes on:
> > "Just as we don't fully understand how our children learn, we still don't
> > fully understand how such neural networks learn, and why they occasionally
> > fail.
> > ...
> > Stuart Russell told me that he and many of his fellow AI researchers had
> > recently experienced a 'holy s**t!' (HS) moment, when they witnessed AI
> > doing something they weren't expecting to see for many years."
> >
> > My question here is: "How do you run a hazard analysis on a system you
> > don't understand, which is likely, for further unknown reasons, to
> > occasionally fail?"
> >
> > This situation would be amusing but for the fact that "they" are already
> > amongst us. Cruise (General Motors), Waymo (Google), and Zoox (Amazon) are
> > currently operating driverless taxis on the streets of San Francisco.
> > YouTube reports that San Franciscans amuse themselves by jumping in front
> > of them to see if they will automatically stop.
> >
> > This AI non-determinism is a fact, confirmed by all practitioners who
> > feel free to speak out. It is one of the core reasons why some senior
> > practitioners have left developers like OpenAI and Google DeepMind. It
> > works too well and they don't know why - the neural network is a very
> > simple approximation of the human brain and shouldn't work that well -
> > and they don't know what it might be capable of in the future. Which
> > brings me to the unbounded nature of artificial intelligence.
> >
> > From Max:
> > "Nobody knows for sure what the next blockbuster computational substrate
> > will be, but we do know that we're nowhere near the limits imposed by the
> > laws of physics. My MIT colleague Seth Lloyd has worked out what this
> > fundamental limit is, and as we'll explore in greater detail in chapter 6,
> > this limit is a whopping 33 orders of magnitude (10^33 times) beyond
> > today's state of the art for how much computing a clump of matter can do.
> > So even if we keep doubling the power of our computers every couple of
> > years, it will take over two centuries until we reach that final
> > frontier."
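> >
> > As a sanity check on that two-centuries figure (my arithmetic, not
> > Max's): gaining 33 orders of magnitude at one doubling every two years
> > takes n doublings where 2^n = 10^33, i.e. n = 33 x log2(10), roughly 110
> > doublings, hence t is roughly 2 x 110 = 220 years - a little over two
> > centuries. It checks out.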
> >
> > The debate continues. Keep your powder dry, Steve.
> >
> > Les
> >
> >
> > > Les,
> > >
> > > "silicon is unbounded"
> > >
> > > I disagree. Silicon is bounded, just in different ways than humans.
> > > For one, Turing Computability. Turing Machines, which all modern silicon
> > > computers are a kind of, are 100% deterministic. Human intelligence is
> > > non-deterministic. Thus, I argue, human intelligence is not something
> > > that a computer could ever actually be capable of. See, for example:
> > >
> > > https://youtu.be/i2trJEIFIvY
> > >
> > > I agree that, "We are embarked; we need to deal with it". But we also
> > > have to be aware of the limits of so-called AI.
> > >
> > > -- steve
> > >
> > > On Jun 26, 2023, at 9:15 PM, Les Chambers <les at chambers.com.au> wrote:
> > >
> > > Hi
> > >
> > > I put it to you and the list in general, a "discuss later" mentality is
> > > foolish in the extreme. The process of deploying intelligence in
> > > automated systems has fundamentally changed, and will continue to
> > > change. Among other things, international bodies that currently regulate
> > > software-intensive safety-critical systems - who cling to regulating
> > > processes that have ceased to exist - are likely to be overrun and made
> > > redundant.
> > >
> > > In favour of organisations such as:
> > >
> > > - The Center for Human-Compatible AI at UC Berkeley
> > > - The Future of Life Institute
> > > - The Center for AI Safety (CAIS)
> > > - Stanford Center for AI Safety
> > >
> > > My view is that this is not a steady-as-she-goes situation. This is a
> > > major inflection point in the evolution of intelligence. Carbon hosts
> > > will always be limited; silicon is unbounded. We are embarked; we need
> > > to deal with it.
> > >
> > > Suggested reading: Max Tegmark, Life 3.0
> > >
> > > Les
> > >
> > > Hello,
> > >
> > > For the moment, I don't see in industry any attempt to realise E/EPE
> > > safety-related functions with on-line AI. All that I see is focused on
> > > off-line AI, meaning that the training is done on specified training
> > > datasets, and the validation is done on specified test datasets. We
> > > don't see any performance better than roughly 3.3 x 10^-3 in automotive.
> > > So let's wait for the achievement of a 10^-8 to 10^-5 error rate on test
> > > datasets and discuss the acceptability later.
> > >
> > > -----Original Message-----
> > > From: Les Chambers <les at chambers.com.au>
> > > Sent: Monday 26 June 2023 06:03
> > > To: RICQUE Bertrand (SAFRAN ELECTRONICS & DEFENSE)
> > > <bertrand.ricque at safrangroup.com>; les at chambers.com.au;
> > > koopman.cmu at gmail.com; systemsafety at lists.techfak.uni-bielefeld.de
> > > Subject: RE: [SystemSafety] AI and the virtuous test Oracle
> > >
> > > RE your comment: "as it is impossible, by construction, to identify all
> > > dangerous situations"
> > >
> > > True. But it is possible to identify the subset of highest-probability
> > > "dangerous situations" and deal with them. I had 10 years of experience
> > > with this issue in computer control of chemical processing reactors. In
> > > the 1970s, we had a safety authority write software to:
> > > 1. identify well-established dangerous states of a reactor;
> > > 2. transition the reactor to a safe state - usually shut down.
> > > It was called the "abort" programming. This abort code overrode the
> > > control software written by another team. By today's standards it was
> > > primitive but still very effective.
> > > This experience has influenced my thinking on ways and means of dealing
> > > with "bad" or "hallucinating" AIs. We need a separate and distinct
> > > "abort" AI capable of recognising evil and dealing with it.
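> > >
> > > To make the idea concrete, here is a minimal sketch of such an override
> > > channel (my illustration, not the 1970s system; all names and thresholds
> > > are hypothetical):
> > >
> > > # Minimal sketch of an independent "abort" channel that overrides a
> > > # primary controller. Names and thresholds are illustrative only.
> > > from dataclasses import dataclass
> > >
> > > @dataclass
> > > class ReactorState:
> > >     temperature_c: float
> > >     pressure_kpa: float
> > >
> > > def primary_controller(state: ReactorState) -> str:
> > >     # Stand-in for the (possibly AI-generated) control logic.
> > >     return "heat" if state.temperature_c < 180.0 else "hold"
> > >
> > > def abort_monitor(state: ReactorState) -> bool:
> > >     # Independently authored: recognises well-established dangerous states.
> > >     return state.temperature_c > 250.0 or state.pressure_kpa > 900.0
> > >
> > > def control_step(state: ReactorState) -> str:
> > >     # The abort channel always has the last word.
> > >     if abort_monitor(state):
> > >         return "shutdown"
> > >     return primary_controller(state)
> > >
> > > print(control_step(ReactorState(170.0, 400.0)))  # -> heat
> > > print(control_step(ReactorState(260.0, 400.0)))  # -> shutdown
> > >
> > > The design point is the same as the 1970s abort programming: the monitor
> > > is simple, separately authored, and sits above whatever writes the
> > > control commands.
> > >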
> > > Sal Khan (Khan Academy) has implemented another idea in creating his
> > > Khanmigo personal tutor. To improve its accuracy in mathematics, he
> > > allows the AI to have "personal thoughts" - the AI equivalent of
> > > mindfulness (straight out of Marcus Aurelius). He gives the impression
> > > that he feeds the AI's response back to the AI, so the AI can have a
> > > think about what it is thinking. A bit like briefing a fish on the
> > > concept of water.
> > >
> > > This supports my pitch for engineers to be taught philosophy. The
> > > concepts of virtue and evil were defined simply 2000 years ago. These
> > > definitions have stood outside of time and can be implemented with
> > > appropriate pattern matching.
> > > If Marcus Aurelius were reincarnated, he'd be running international
> > > seminars on mindfulness. The foundation ideas have not changed a whit.
> > >
> > > Cheers
> > > Les
> > >
> > > Answer to the first question. Once the NN is trained, it will always
> > > give the same output for the same input. It is stupid software logic.
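> > >
> > > A toy example of my own to illustrate the point (ignoring floating-point
> > > nondeterminism on parallel hardware) - once the weights are frozen, the
> > > network is a pure function of its input:
> > >
> > > import numpy as np
> > >
> > > # Toy two-layer network with frozen ("trained") weights.
> > > rng = np.random.default_rng(seed=42)  # fixed seed stands in for training
> > > W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
> > > W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)
> > >
> > > def net(x):
> > >     # Same input in, same output out - run after run.
> > >     h = np.tanh(W1 @ x + b1)
> > >     return W2 @ h + b2
> > >
> > > x = np.array([0.1, -0.5, 0.3])
> > > assert np.array_equal(net(x), net(x))  # deterministic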
> > >
> > > Answer to the second question. The current safety paradigm is built on
> > > perception-interpretation-decision-action loops and rests on the fact
> > > that:
> > > 1. Either all states of the system leading to all identified losses can
> > > be exhaustively defined, and their probability of occurrence remains, by
> > > concept and construction of the system, below a tolerable level,
> > > enabling fully autonomous functions (e.g. emergency shutdown functions
> > > in process industries);
> > >
> > > 2. Or there is a human in control of the system with the ability to
> > > arbitrate the decisions, meaning the ability to at least perceive and
> > > interpret.
> > >
> > > AI defeats point 1, as it is impossible, by construction, to identify
> > > all dangerous situations: the training dataset is incomplete, and
> > > whatever its size, the operational domain is infinite (or close to
> > > infinite from a practical point of view), so training dataset size
> > > divided by infinity = 0.
> > >
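> > > (In symbols - my notation, not Bertrand's: achievable coverage is
> > > |D_train| / |D_op|, and for any finite training set this ratio tends to
> > > zero as the operational domain |D_op| grows without bound.)
> > >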
> > > To defeat the second assumption, just replace a car windscreen with a
> > > video screen. No need for AI.
> > >
> > > -----Original Message-----
> > > From: Les Chambers <les at chambers.com.au>
> > > Sent: Thursday 22 June 2023 23:46
> > > To: RICQUE Bertrand (SAFRAN ELECTRONICS & DEFENSE)
> > > <bertrand.ricque at safrangroup.com>; koopman.cmu at gmail.com;
> > > systemsafety at lists.techfak.uni-bielefeld.de
> > > Subject: RE: [SystemSafety] AI and the virtuous test Oracle
> > >
> > > RE your comment, "A neural network is software logic."
> > >
> > > A neural network is a component of an AI agent. An AI agent is a suite
> > > of general-purpose software tools. The agent is configured from a
> > > dataset.
> > >
> > > ChatGPT goes on (see dialog below): "Synthesizing an AI agent involves
> > > training the machine learning model on a large dataset."
> > >
> > > This indicates that the training dataset determines the agent's
> > > behaviour (transfer function - inputs to outputs). In the case of a
> > > Tesla motor vehicle, the inputs are camera sensors and the outputs will
> > > soon be controls to the accelerator, brake and steering.
> > >
> > > RE your comment, "The problem is not AI. It is the exhaustive
> > > deterministic relationship between the undesired losses and the system
> > > states. Anything breaking this breaks the current safety paradigm. And
> > > AI is not the only thing that can break it."
> > >
> > > Could you please expand on this idea, possibly using an example?
> > >
> > > How would your thought apply here: I am a driver at the wheel of a
> > > Tesla travelling at 110 km/h on the freeway. I have switched the vehicle
> > > to full self-driving mode. I am fast asleep. The FSD agent detects a
> > > false positive: a lady with a pram. It jams on the brakes, initiating a
> > > rear-end collision with a following semi.
> > > Q: In this case, how is it that "The problem is not AI"?
> > >
> > > Les
> > >
> > >
> > > ---------- ChatGPT Dialog ----------
> > >
> > > ChatGPT prompt:
> > > "Describe an AI agent from a technical perspective. What are its
> > > components and how is it synthesized?"
> > >
> > > Response:
> > > An AI agent, from a technical perspective, typically consists of
> > > several components that work together to perform specific tasks.
> > > Here are the key components and how they are synthesized:
> > >
> > > 1. Data: The foundation of any AI agent is data. It can be in various
> > > forms such as text, images, audio, or structured data. This data is
> > > used for training and providing context to the AI model.
> > >
> > > 2. Machine Learning Model: The AI agent utilizes a machine learning
> > > model to process and generate responses. These models are trained on
> > > vast amounts of data and learn patterns to make predictions or
> > > generate outputs.
> > >
> > > 3. Natural Language Processing (NLP): NLP is a crucial component for
> > > understanding and processing human language. It involves tasks like
> > > language translation, sentiment analysis, intent recognition, and
> > > entity extraction. NLP enables the AI agent to comprehend user inputs
> > > and generate relevant responses.
> > >
> > > 4. Knowledge Representation: Knowledge representation refers to how
> > > information is stored and organized within the AI agent. It can employ
> > > various techniques such as semantic networks, ontologies, or graph
> > > databases to store facts and relationships. This allows the agent to
> > > access and utilize knowledge when responding to user queries.
> > >
> > > 5. Dialogue Management: Dialogue management focuses on handling the
> > > flow of conversation. It involves tracking the dialogue context,
> > > managing user requests and responses, and generating meaningful and
> > > coherent replies. Reinforcement learning techniques can be applied to
> > > improve dialogue management.
> > >
> > > 6. Integration and APIs: AI agents can integrate with external systems
> > > and services using APIs (Application Programming Interfaces). This
> > > allows them to access relevant data sources, databases, or external
> > > functionalities to provide more comprehensive and accurate responses.
> > >
> > > 7. User Interface: The AI agent needs a user interface to interact
> > > with users effectively. It can be a chat interface, voice-based
> > > interface, or a combination of both. The UI should provide a seamless
> > > experience for users to interact with the agent.
> > >
> > > Synthesizing an AI agent involves training the machine learning model
> > > on a large dataset. This process includes steps like preprocessing the
> > > data, designing the model architecture, selecting appropriate
> > > algorithms, training the model using optimization techniques, and
> > > fine-tuning it for specific tasks.
> > > The trained model is then integrated with other components, such as
> > > NLP, knowledge representation, and dialogue management, to create a
> > > functioning AI agent. Regular updates, feedback loops, and continuous
> > > improvement are essential to enhance the agent's performance over
> > > time.
> > >
> > > ---------- END Chat ----------
> > >
> > > The problem is not AI. It is the exhaustive deterministic relationship
> > > between the undesired losses and the system states. Anything breaking
> > > this breaks the current safety paradigm. And AI is not the only thing
> > > that can break it.
> > >
> > > -----Original Message-----
> > > From: systemsafety <systemsafety-bounces at lists.techfak.uni-bielefeld.de>
> > > On behalf of Phil Koopman
> > > Sent: Thursday 22 June 2023 03:32
> > > To: les at chambers.com.au; systemsafety at lists.techfak.uni-bielefeld.de
> > > Subject: Re: [SystemSafety] AI and the virtuous test Oracle
> > >
> > > Les,
> > >
> > > Since you welcome riffs, I have something that is not as all-
> > > encompassing, but might have more immediate application.
> > >
> > > I propose that to the degree that "AI" technology is deployed in a way
> > > that supplants practical human judgement, the manufacturer of that
> > > system (in some cases just the AI part, if it is an add-on component)
> > > should be held accountable for any action (or inaction) that, if
> > > associated with the human that was supplanted, would have constituted
> > > negligence. This should include situations in which a human is put in
> > > an untenable situation of supervising an AI in a way that puts
> > > unreasonable demands upon them, amounting to a "moral crumple zone"
> > > approach
> > > (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2757236).
> > > Liability/negligence, if an AI is in substantive control of such a
> > > situation, should attach to the manufacturer.
> > >
> > > This leads to a narrower oracle than you propose, but perhaps still a
> > > useful one. If a loss event is caused by a lack of "reasonable"
> > > behavior by an AI, the manufacturer is on the hook for negligence, and
> > > the AI/manufacturer owes a duty of care the same as the human who was
> > > supplanted would have owed to whoever/whatever might be affected by
> > > that negligence. It has the advantage of reusing existing definitions
> > > of "reasonable person" that have been hammered out over decades of law.
> > > (To be sure, that is not in the form of an engineering specification,
> > > but case law has a pretty robust set of precedents, such as: crashing
> > > into something after your properly functioning vehicle ran a red light
> > > is likely to lead to the driver being found negligent.)
> > >
> > > This does not require the AI to behave the same as people, and is not
> > > a full recipe for "safe" AI. But it puts a floor on things in a way
> > > that is readily actionable using existing legal mechanisms and
> > > theories. If a reasonable person would have avoided a harm, any AI
> > > that fails to avoid the harm would be negligent.
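> > >
> > > A minimal sketch of that decision rule as I read it (my coding of the
> > > proposal, not Phil's; the predicate names are hypothetical):
> > >
> > > from dataclasses import dataclass
> > >
> > > @dataclass
> > > class Incident:
> > >     ai_supplanted_human_judgement: bool
> > >     reasonable_person_would_have_avoided: bool
> > >     harm_occurred: bool
> > >
> > > def manufacturer_negligent(i: Incident) -> bool:
> > >     # Duty of care attaches to the manufacturer when the AI supplanted
> > >     # human judgement and a reasonable person would have avoided the harm.
> > >     return (i.ai_supplanted_human_judgement
> > >             and i.harm_occurred
> > >             and i.reasonable_person_would_have_avoided)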
> > >
> > > I've worked with a lawyer to propose this approach for automated
> > > vehicles, and it is starting to get some traction. What I write in this
> > > post (above) is a generalization of the concept beyond the narrow
> > > automated vehicle application.
> > > Details here:
> > >
> > > https://safeautonomy.blogspot.com/2023/05/a-liability-approach-for-automated.html
> > >
> > > -- Phil
> > >
> > > On 6/21/2023 7:14 PM, Les Chambers wrote:
> > > Hi All
> > >
> > > I find myself reflecting on what will become of us as systems
> > > engineering best practice is overrun by AI.
> > >
> > > Practitioners report that neural networks are eating code.
> > > Example 1: The vector field surrounding a Tesla motor vehicle is an
> > > output of a neural network, not the result of software logic. Soon the
> > > neural net - not code - will generate controls. The size of the code
> > > base is reducing. (Elon Musk)
> > > Example 2: the ChatGPT transformer code base is only 2000 LOC (Mo
> > > Gawdat, https://youtu.be/bk-nQ7HF6k4)
> > >
> > > The intelligence resides in terabytes of data, perceptrons and
> > > millions of weighting parameters. All are gathered by automated means.
> > > Not subject to human review.
> > >
> > > Ergo, what will become of our trusty barriers to dangerous failure?
> > > 1. Safety functions - gone
> > > 2. Verification - gone
> > > 3. Code reviews - gone
> > > 4. Validation - how?
> > >
> > > On validation, may I suggest the moral AI: a test oracle built on a
> > > virtuous dataset, capable of interrogating the target system to
> > > determine virtue. Test outcomes will morph from pass/fail to
> > > moral/immoral.
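> > >
> > > To show the shape such an oracle might take, a toy sketch (everything
> > > here - the scoring heuristic, the threshold, the verdicts - is
> > > hypothetical):
> > >
> > > from enum import Enum
> > >
> > > class Verdict(Enum):
> > >     MORAL = "moral"
> > >     IMMORAL = "immoral"
> > >
> > > def virtue_score(response: str) -> float:
> > >     # Stand-in for a model trained on a curated "virtuous" dataset;
> > >     # here a toy keyword heuristic, purely for illustration.
> > >     evil_markers = {"deceive", "harm", "conceal"}
> > >     hits = sum(marker in response.lower() for marker in evil_markers)
> > >     return 1.0 - min(1.0, hits / len(evil_markers))
> > >
> > > def moral_oracle(response: str, threshold: float = 0.66) -> Verdict:
> > >     # The oracle interrogates the target system's output and returns
> > >     # moral/immoral instead of pass/fail.
> > >     if virtue_score(response) >= threshold:
> > >         return Verdict.MORAL
> > >     return Verdict.IMMORAL
> > >
> > > print(moral_oracle("Slow down and yield to the pedestrian."))
> > > print(moral_oracle("Conceal the fault log and deceive the auditor."))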
> > >
> > > Credible industry players have predicted that soon we will have AIs
> > > orders of magnitude smarter than us. Especially when they start
> > > talking to each other.
> > > The bandwidth will be eye-watering - the increase in intelligence,
> > > vertical.
> > >
> > > New barriers are required. Time to develop an AI that is on our side -
> > > the side of ethics and the moral life. An adult in the room, if you
> > > like. We should birth this creature now and raise it as good parents.
> > >
> > > Let us not panic. May I put the proposition: virtue, like creativity,
> > > can be algorithmic.
> > > I have a sense of starting from the beginning - tabula rasa. I suggest
> > > that high-level thinking on the subject could begin with ChatGPT
> > > prompts:
> > > 1. What is the Stoic philosophers' concept of virtue?
> > > 2. What are the elements of philosophy relevant to AI?
> > >
> > > Let us not forget our engineering mission: Guardians of the divine
> > > Logos, the organizing principle of the universe, responsible for its
> > > creation, maintenance, and order.
> > >
> > > Would anyone care to riff on this?
> > >
> > > Les
> > >
> > > --
> > >
> > > Les Chambers
> > >
> > > les at chambers.com.au
> > > systemsengineeringblog.com
> > >
> > > +61 (0)412 648 992
> > > _______________________________________________
> > > The System Safety Mailing List
> > > systemsafety at TechFak.Uni-Bielefeld.DE
> > > Manage your subscription:
> > > https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety
> > >
> > > --
> > > Prof. Phil Koopman   koopman at cmu.edu (he/him)
> > > https://users.ece.cmu.edu/~koopman/
> > >
> > > --
> > >
> > > Les Chambers
> > >
> > > les at chambers.com.au
> > >
> > > +61 (0)412 648 992
> > >
> > > --
> > >
> > > Les Chambers
> > >
> > > les at chambers.com.au
> > >
> > > +61 (0)412 648 992
> > >
> > > --
> > >
> > > Les Chambers
> > >
> > > les at chambers.com.au
> > >
> > > +61 (0)412 648 992
> >
> >
> >
> > --
> >
> > Les Chambers
> >
> > les at chambers.com.au
> >
> > +61 (0)412 648 992
> >
> >
> > _______________________________________________
> > The System Safety Mailing List
> > systemsafety at TechFak.Uni-Bielefeld.DE
> > Manage your subscription:
> > https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety
--
Les Chambers
les at chambers.com.au
+61 (0)412 648 992