<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#0432ff" bgcolor="#feffff">
<p>On 14/01/2026 16:23, Derek M Jones wrote:</p>
<blockquote type="cite"
cite="mid:bb067652-95e7-43b0-aa1b-873eee09234a@knosof.co.uk">You
have probably heard about how LLMs have been solving
<br>
mathematical problems, and how the LLMs have used the Lean
<br>
proof assistant to verify the correctness of the proof.
<br>
<br>
Guess what? Not only do LLMs generate buggy code, they also
<br>
generate buggy proofs. A particularly insidious bug occurs
<br>
when the LLM rewrites the original to create something that is
<br>
easier to prove, but does not prove that the rewritten
<br>
question is the same as the original one (sometimes it isn't).
<br>
<br>
Lots of discussion and links here
<br>
<a class="moz-txt-link-freetext" href="https://www.lesswrong.com/posts/rhAPh3YzhPoBNpgHg/lies-damned-lies-and-proofs-formal-methods-are-not-slopless">https://www.lesswrong.com/posts/rhAPh3YzhPoBNpgHg/lies-damned-lies-and-proofs-formal-methods-are-not-slopless</a>
<br>
<br>
...</blockquote>
<p>Alan Turing raised the interesting question of whether human
intelligence requires a random element that can also lead to
errors. In a 1947 lecture to the London Mathematical Society,
Turing said “if a machine is expected to be infallible, it cannot
also be intelligent. There are several mathematical theorems that
say almost exactly that. But these theorems say nothing about how
much intelligence may be displayed if a machine makes no pretence
at infallibility.”</p>
<p>Martyn</p>
<p><br>
</p>
</body>
</html>