drop by and say something nice
Nov. 30th, 2025 09:07 am
✨ holiday love meme 2025 ✨
my thread is here
or just comment on this post if that's more your style
There's really no such thing as a "typical" wedding cake anymore.
So today, we're going to give in to our dark sides a little.
We have to start with classic black, right?
(By Hey There Cupcake, California)
Stunning, hand-painted black.
Of course, there are a lot of dark choices beyond black. How about this gorgeous teal number?
(By Have+Some+Cake, United Kingdom)
The rich color, offset tier, and hand-painting really put this one over the top.
Or maybe you'd prefer a forest that isn't at all forbidding.
(By Immaculate Confections, United Kingdom)
In fact, I'd call it enchanting.
This red cake was inspired by Melisandre, the Red Priestess from "Game of Thrones."
(By Candytuft Cakes, Ireland)
It doesn't need to cast a glamour to be beautiful. Wow.
Then there are the times you just want to burst onto the scene and yell, "Ta da!"
(By Kuchen Diva, Switzerland)
Ta da!
The "origami" is edible wafer paper. So clever.
This purple cake isn't exactly a shrinking violet:
(By Dolce Lusso Cakes, United Kingdom)
Those are handmade sugar orchids; I like how the gold leaf really makes them pop.
And look at all the different textures on this stunner:
(By Foxtail Bakeshop, Oregon)
Quick. Somebody knit me this cake!
The baker went for a crumpled metal effect on this steampunk-inspired cake, very funky cool:
(By Sylwia Sobiegraj The Cake Designer, Ireland)
Plus it took me a second to realize only two of the roses are sculpted; the middle one is hand-painted.
Proving yet again that steampunk doesn't have to be brown!
Not that there's anything wrong with brown, of course...
(By Cove Cake Design, Ireland)
Mmm. Do you think that's chocolate? I think it's chocolate. Does anyone have a fork so I can check? And maybe some milk?
But I digress...
Let's end with a splash of deep, dark color:
(By The Cocoa Cakery, Canada)
I think I'm in love.
These cakes certainly prove there's no reason to be afraid of the dark.
Isn't that Sweet?
*****
And from my other blog, Epbot:

Via Charlie Marshall, who writes:
I took this photo at the British Wildlife Centre back in March. In this photo the otter was bursting out of the water with great enthusiasm and I was trying to keep up with him with my camera lens.
***
The terrible hyphenation one can reasonably attribute to a failure to invest in subject-specialist proofreaders (or possibly any proofreaders at all, good grief).
The wildly ahistorical nonsense about the history of medicine? Less so. I begin to understand why there isn't a references section, and I've only made it as far as page 7 before needing to stop and shriek about it and also stare at a wall for a bit...
I suppose it's remotely possible that there's someone with a similar name to mine for whom this would be a relevant conference:
The ITISE 2026 (12th International conference on Time Series and Forecasting) seeks to provide a discussion forum for scientists, engineers, educators and students about the latest ideas and realizations in the foundations, theory, models and applications for interdisciplinary and multidisciplinary research encompassing disciplines of mathematics, econometric, statistics, forecaster, computer science, etc in the field of time series analysis and forecasting.
***
I have discovered a new 'off-putting phrase that, found in a blurb, causes you to put the book down as if radioactive': 'this gargantuan work of supernatural existentialism' - even without the name of the author - Karl Ove Knausgård - who has apparently moved on from interminable autofiction to interminable this.
***
A certain Mr JJ, who purports to be an Art Critick, on the long history of artistic rivalries (between Bloke Artists, natch):
Shunning competition makes the Turner Prize feel pointless. It may be why there are no more art heroes any more.
Artistic competition goes to the essence of critical discrimination. TS Eliot said someone who liked all poetry would be very dull to talk to about poetry. Double header exhibitions that rake up old rivalries are not shallow, but help us all be critics and understand that loving means choosing. If you come out of Turner and Constable admiring both artists equally, you probably haven’t truly felt either. And if you prefer Constable, it’s pistols at dawn.
***
I rather loved this by Lucy Mangan, and will be adopting the term 'frothers' forthwith:
I like to grab a cup of warm cider and settle down with as many gift guides as I can and enjoy the rage they fuel among people who have misunderstood what many might feel was the fairly simple concept of gift guides entirely. I am particularly fond of people who look at a list headed, say, “Stocking stuffers for under £50” and respond by commenting on how £50 is a ridiculous amount of money to be spending on a stocking stuffer. They are closely followed in my pantheon of greats by those who see something like “25 affordable luxuries for loved ones” and can only type “Affordable BY WHOM?!?!” before falling to the ground in a paroxysm of ill-founded self-righteousness. On and on it goes. I love it. Never change, frothers. You are the gift that keeps on giving.
Further to that exposé of freebirthers: A concerned NHS midwife responds to an article about the Free Birth Society
Photo by Teagan Dumont, via Mark Dumont
Abstract: Humans learn language from less than 100 million words. Today’s state-of-the-art language models are exposed to trillions of words. What do today’s human-scale language models learn—and what don’t they? How can we close this gap in data efficiency? In this talk, I will start by presenting insights from 3 years of the BabyLM Challenge. The purpose of BabyLM is to encourage researchers to train language models using only as much data as a human would need when first learning language, and to democratize access to language modeling research. Participants have submitted a wide variety of systems; the best-performing systems tend to come from innovations to the architecture or training objective. Then, I will present recent work on the training dynamics of both human-scale and large-scale language models. I will present a method for understanding what concepts a model is learning at specific points in training. Using subject-verb agreement as a case study, I will show that simpler word-matching features are learned early in training, while more abstract grammatical number detectors—including more abstract cross-linguistic number features—are learned far later in training. I will conclude by discussing the future of BabyLM, and the future of interpretability as a tool for understanding—and improving—language model training.
Bio: Aaron Mueller is an Assistant Professor (Lecturer) of Computer Science (Informatics) and, by courtesy, of Data Science at Boston University. His research centers on developing language modeling methods and evaluations inspired by causal and linguistic principles, and applying these to precisely control and improve the generalization of computational models of language. He completed his Ph.D. at Johns Hopkins University. His work has been published in ML and NLP venues (such as ICML, ACL, and EMNLP) and has won awards at TMLR and ACL. He is a recurring organizer of the BlackboxNLP and BabyLM workshops.
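The subject-verb agreement evaluation mentioned in the abstract is typically run as a minimal-pair test: the model scores a grammatical sentence against a matched ungrammatical one, and the metric is how often it prefers the grammatical form. Here is a minimal sketch of that metric — the `score_sentence` function and its tiny lexicon are toy stand-ins of my own invention; a real evaluation would use the language model's log-probability for each sentence, and the speaker's actual method may differ:

```python
# Minimal-pair evaluation for subject-verb agreement, sketched with a
# toy scorer. A real evaluation would replace score_sentence with the
# model's total log-probability log P(sentence).

def score_sentence(sentence, agreement_lexicon):
    # Toy scorer: +1 for each (subject, verb) pair from the lexicon
    # that co-occurs in the sentence. This mimics the "simple
    # word-matching features" learned early in training.
    words = sentence.lower().rstrip(".").split()
    return sum(1 for subj, verb in agreement_lexicon
               if subj in words and verb in words)

def minimal_pair_accuracy(pairs, scorer):
    # A pair is (grammatical, ungrammatical); the model "passes" when
    # it assigns the grammatical sentence a strictly higher score.
    correct = sum(1 for good, bad in pairs if scorer(good) > scorer(bad))
    return correct / len(pairs)

# Hand-built lexicon of agreeing (subject, verb) forms -- illustration
# only, not real evaluation data.
LEXICON = [("keys", "are"), ("key", "is"), ("dogs", "bark"), ("dog", "barks")]

PAIRS = [
    ("The keys to the cabinet are on the table.",
     "The keys to the cabinet is on the table."),
    ("The dog near the trees barks.",
     "The dog near the trees bark."),
]

if __name__ == "__main__":
    acc = minimal_pair_accuracy(PAIRS, lambda s: score_sentence(s, LEXICON))
    print(f"minimal-pair accuracy: {acc:.2f}")
```

Note that the attractor nouns ("cabinet", "trees") are what make these pairs interesting: a model relying on the nearest noun gets them wrong, while one tracking the true subject gets them right — which is how such pairs can reveal when, during training, the more abstract number features emerge.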