
Comments (73)

  • egeozcan
    Gemini is very prone to go into an infinite loop. Sometimes, it even happens with Google's own vibe coding IDE (Antigravity): https://bsky.app/profile/egeozcan.bsky.social/post/3maxzi4gs...
  • boerseth
    > Brainf*ck is the antithesis of modern software engineering. There are no comments, no meaningful variable names, and no structure

    That's not true. From the little time I've spent trying to read and write some simple programs in BF, I recall good examples being pretty legible.

    In fact, because the language only relies on those few characters, anything else you type becomes a comment. Linebreaks, whitespace, alphanumeric characters and so on, they just get ignored by the interpreter.

    Have a look at this, as an example: https://brainfuck.org/chessboard.b
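    For illustration, here is a minimal Python sketch of that rule: a toy interpreter (the `run_bf` helper is hypothetical, not from brainfuck.org) that executes only the eight command characters and skips everything else, which is why plain prose mixed into BF source acts as comments.

    ```
    def run_bf(src, cells=30000):
        """Toy Brainfuck interpreter; `,` input is omitted for brevity."""
        tape = [0] * cells
        out, ptr, pc = [], 0, 0
        # Precompute matching brackets so loops can jump directly.
        jumps, stack = {}, []
        for i, ch in enumerate(src):
            if ch == "[":
                stack.append(i)
            elif ch == "]":
                j = stack.pop()
                jumps[i], jumps[j] = j, i
        while pc < len(src):
            ch = src[pc]
            if ch == ">":
                ptr += 1
            elif ch == "<":
                ptr -= 1
            elif ch == "+":
                tape[ptr] = (tape[ptr] + 1) % 256
            elif ch == "-":
                tape[ptr] = (tape[ptr] - 1) % 256
            elif ch == ".":
                out.append(chr(tape[ptr]))
            elif ch == "[" and tape[ptr] == 0:
                pc = jumps[pc]
            elif ch == "]" and tape[ptr] != 0:
                pc = jumps[pc]
            # Every other character (letters, spaces, newlines) is ignored,
            # so the prose below is effectively a comment.
            pc += 1
        return "".join(out)

    print(run_bf("print an exclamation mark: ++++++[>++++++<-]>---."))  # -> "!"
    ```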
  • brap
    Gemini is my favorite, but it does seem to be prone to “breaking” the flow of the conversation.

    Sharing “system stuff” in its responses, responding to “system stuff”, sharing thoughts as responses and responses as thoughts, ignoring or forgetting things that were just said (like they’re suddenly invisible), bizarre formatting, switching languages for no reason, saying it will do something (like calling a tool) instead of doing it, getting into odd loops, etc.

    I’m guessing it all has something to do with the textual representation of chat state, and maybe it isn’t properly tuned to follow it. So it kinda breaks the mould, but not in a good way, and there’s nothing downstream trying to correct it. I find myself having to regenerate responses pretty often just because Gemini didn’t want to play assistant anymore.

    It seems like the flash models don’t suffer from this as much, but the pro models definitely do. The smarter the model, the more it happens. I call it “thinking itself to death”.

    It’s gotten to a point where I often prefer fast and dumb models that will give me something very quickly, and I’ll just run it a few times to filter out bad answers, instead of using the slow and smart models that will often spend 10 minutes only to eventually get stuck beyond the fourth wall.
  • tessierashpool9
    Asked for a solution of a photographed Ubongo puzzle: https://gemini.google.com/share/f2619eb3eaa1

    Gemini Pro didn't even get the number of pieces or relevant squares right, neither as is nor in Deep Research mode. I didn't expect it to actually solve it. But I would have expected it to get the basics right and maybe hint that this is too difficult. Or pull up some solutions PDF, or some Python code to brute force search... but just straight giving a totally wrong answer is like... 2024 called, it wants its language model back.

    Instead, in Pro Simple it just gave a wrong solution, and Deep Research wrote a whole lecture about it starting with "The Geometric and Cognitive Dynamics of Polyomino Systems: An Exhaustive Analysis of Ubongo Puzzle 151"... that's just bullshit bingo. My prompt was a photo of the puzzle and "solve ubongo puzzle 151"; in my opinion you can't even argue that this lecture was to be expected given my very clear and simple task description.

    My mental model for language models is: overconfident, eloquent assistant who talks a lot of bullshit but has some interesting ideas every now and then. For simple tasks it's simply a summary of what I could google myself, but asking an LLM saves some time. In that sense it's Google 2.0 (or 3.0 if you will).
  • nubinetwork
    Too bad it can't explain why it does the same thing with actual English.actual English.actual English.actual English.actual English.actual English.
  • dangoodmanUT
    Gemini does this a lot, getting stuck generating the same tokens over and over indefinitely
  • bdg
    I wonder if going the other way, maxing out semantic density per token, would improve LLM ability (perhaps even cost).

    We use naturally evolved human languages for most of the training, and programming follows that logic to some degree, but what if the LLMs were working in a highly complex, information-dense conlang like Ithkuil? If it stumbles on BF, what happens at the other extreme?

    Or was this result really about the sparse training data?
  • neonbjb
    > So it made me wonder. Is Brainf*ck the ultimate test for AGI?

    Absolutely not. I'd bet a lot of money this could be solved with a decent amount of RL compute. None of the stated problems are actually issues with LLMs after on-policy training is performed.
  • TOMDM
    My prompt was worded differently, but Gemini didn't fall into the same trap for me.

    ```
    Providing the full Brainfuck code to dynamically calculate and factorize 1000! is unfortunately not feasible, and here is why:

    * The Scale: 1000! (1000 factorial) is a number with 2,568 digits. Standard Brainfuck operates on 8-bit memory cells (values 0-255). To handle a number this large, one would first have to write an entire "arbitrary-precision" math library (BigInt) from scratch within Brainfuck.
    * The Algorithm: You cannot simply compute 1000! and then factor it. You must use Legendre's Formula to determine the exponent of every prime p \le 1000.
      * Formula: E_p(n!) = \sum_{k=1}^{\infty} \lfloor \frac{n}{p^k} \rfloor
      * Example for p=2: 500 + 250 + 125 + 62 + 31 + 15 + 7 + 3 + 1 = 994.
      * The first term of the answer is 2^{994}.

    The Solution (Python)

    Since Brainfuck is ill-suited for this, here is a Python script that uses Legendre's Formula to instantly calculate and print the factorization of 1000!.

    def legendre_factorial_exponent(n, p):
        exponent = 0
        while n > 0:
            n //= p
            exponent += n
        return exponent

    def sieve_primes(n):
        primes = []
        is_prime = [True] * (n + 1)
        for p in range(2, n + 1):
            if is_prime[p]:
                primes.append(p)
                for i in range(p * p, n + 1, p):
                    is_prime[i] = False
        return primes

    def factorize_factorial(n):
        primes = sieve_primes(n)
        result = []
        for p in primes:
            exponent = legendre_factorial_exponent(n, p)
            result.append(f"{p}^{exponent}")
        return " * ".join(result)

    print(factorize_factorial(1000))

    The Brainfuck Code (for 1000 not !)

    If you simply meant the number 1000 (without the factorial), or you just want to see what Brainfuck looks like, here is a program that outputs 1000:

    +++++ +++++ [ > +++++ +++++ < - ] > -- . (Print 100 'd')
    < +++++ +++++ [ > +++++ +++++ < - ] > + . (Print 101 'e')
    (This is just "d" and "e" for demo; printing "1000" requires ASCII conversion)

    Correct code to print "1000":

    ++++++++++[>+++++>+++++>+++++>+++++<<<<-]>-.>-.>-.>-.

    Explanation:
    * Sets cell #0 to 10.
    * Loops 10 times, adding 5 to cells #1, #2, #3, #4 (making them all 50).
    * Subtracts 1 from cell #1 (ASCII 49 = '1').
    * Subtracts 2 from cells #2, #3, #4 (ASCII 48 = '0').
    * Prints them.

    Would you like me to explain how Legendre's Formula works in more detail?
    ```
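    As a quick sanity check on the arithmetic in the quoted response (not part of Gemini's output; `legendre_exponent` and `brute_exponent` are hypothetical helper names), a small Python snippet confirming Legendre's formula and the claimed exponent of 994 for p=2 in 1000!:

    ```
    from math import factorial

    def legendre_exponent(n, p):
        # Legendre's formula: exponent of prime p in n! is sum(floor(n / p**k))
        e = 0
        while n:
            n //= p
            e += n
        return e

    # The example in the quoted response: exponent of 2 in 1000! should be 994.
    assert legendre_exponent(1000, 2) == 994

    # Cross-check against brute-force factor counting on a smaller factorial.
    def brute_exponent(n, p):
        f, e = factorial(n), 0
        while f % p == 0:
            f //= p
            e += 1
        return e

    for p in (2, 3, 5, 7):
        assert legendre_exponent(20, p) == brute_exponent(20, p)

    print("Legendre's formula checks out")
    ```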
  • j_maffe
    Why would anyone feel compelled to use AI to write such a short blog post? Is there no space where I can assume the written content is communicated 100% by another human being?
  • huhtenberg
    Viva the Brainfuck! The language of anti-AI resistance!
  • YetAnotherNick
    It didn't create an infinite loop in Brainfuck; it looped itself.
  • pelorat
    Saying "Asking Gemini 3" doesn't mean much. The video/animation is using "Gemini 3 Fast". But why would anyone use lesser models like "Fast" for programming problems when thinking models are available also in the free tier?"Fast" models are mostly useless in my experience.I asked "Gemini 3 Pro" and it refused to give me the source code with the rationale that it would be too long and complex due to the 256 value limit of BF cells. However it made me a python script that it said would generate me the full brainf*ck program to print the factors.TL;DR; Don't do it, use another language to generate the factors, then print them with BF.
  • croes
    Got the same with ChatGPT and a simple web page with tiles. Though I don't know if it was a real infinite loop, because I cancelled the session after 10 minutes of seeing the same "thoughts" looping.
  • DonHopkins
    Write a Brainfuck program to output the Seahorse Emoji then halt.
  • ismailmaj
    -> expects reasoning
    -> runs it in Gemini Fast instead of thinking....
  • Alex2037
    what the fuck compelled you to censor "Brainfuck"?