Finally, after a long time, we got some new GPUs!

This means we no longer need to rely on the litany of scripts we’ve been using to grab nodes on the JURECA-WestAI supercomputer.

What does that entail?

- Stability should go up, a lot: with the models running on nodes allocated exclusively to Blablador 24/7, availability and stability will improve greatly. No more models failing to launch because the machine is too busy.
- Performance improvements: by removing the middle-man of proxy jumps and other dark magic I’ve been doing, things should be smoother. This means more tokens for us!
- More models: we want to use the new nodes for the latest and greatest models, and we can now use the available time on JURECA-WestAI to run gigantic models experimentally, such as Kimi-K2.5. As this will be done in an experimental way, as Blablador always has, models will come and go according to needs, requests, and voices in my head.

I would like to thank Fritz Niesel and Stefan Kesselheim for making it possible to have Blablador run on the WestAI machine. Without that, Blablador would have been a much smaller dog.

Thanks also to Konstantin Rushschanskii, who has been doing magic with Kubernetes and who improved model reliability a lot at the weirdest times of the day.

Thanks to Tim Kreuzer for the patience he’s been showing us with our random and absurd questions.

Thanks to the sysadmins of JURECA and the Jülich cloud, especially Sebastian Achilles. You guys are the best.

Right now, the only models running on the new hardware are GPT-OSS, a personal favorite of many, and Minimax 2.1, which in my opinion is the best model we have right now.

Let’s bark this weekend!

Alex, Blablador Honcho
And a second special thanks to Stefan Kesselheim, who procured the new GPUs! Without him, we would be in a pickle! Alex
Dear Alexandre,

great news that Blablador now has its own dedicated server. But it seems that "alias-large" is not available at the moment? Is this just temporary?

Best
Kai

-----Original Message-----
From: Strube, Alexandre <a.strube@fz-juelich.de>
Sent: Sunday, 1 February 2026 13:52
To: blablador-news <blablador-news@fz-juelich.de>
Subject: [Blablador-news] Re: GOOD NEWS EVERYONE

And a second special thanks to Stefan Kesselheim, who procured the new GPUs! Without him, we would be in a pickle!

Alex
participants (2)
- Kai Dröge
- Strube, Alexandre