The Dark Matter Crisis (original) (raw)

Below is the second guest post by a young scientist who had dreamed a dream but left academia, after finishing a PhD in cosmology at a renowned institution, as a consequence of the experiences made there. The main guest post can be read in DMC101; here the young scientist provides some advice.


Begin Prologue by Pavel Kroupa:

The young scientist prefers to stay anonymous and wrote to me: “_I have written below a possible companion blogpost to the previous which contains a list of very helpful resources which I am hoping would make a good selection to recommend to your readers under the title “Advice to a Young Scientist”. A suitable picture would be “Starry Night” by Van Gogh. Whilst off the main topic of your blog, I am hoping this compendium of books would be of use to many of your readers. Certainly I have found them of great use to me. I have written a short 1-paragraph intro to the list._“

A painting of a scene at night with 10 swirly stars, Venus, and a bright yellow crescent Moon. In the background are hills, and in the foreground a cypress tree and houses.

The Starry Night by Vincent van Gogh (1889, Wikimedia Commons).

End Prologue


The young scientist writes:

I told the story of my unfortunate experiences during my PhD and afterwards, and how I eventually got over them, in a previous blogpost (DMC100(to be inserted)). On a more positive note, whilst I did not get the PhD supervision that I had hoped for, I made up for it by studying an extremely wide selection of books from all manner of genres in order to improve my science, my thinking, and my general mental/emotional/psychological skills. I have given a compendium here (with links to Amazon) in the hope that it is helpful to other scientists, or indeed people from any walk of life:

How to Solve It – George Pólya

Advice to a Young Scientist – Peter Medawar

A Mathematician’s Apology – G.H. Hardy

Letters to a Young Mathematician – Ian Stewart

Mindset – Carol Dweck

The Art of Learning – Josh Waitzkin

My Search for Ramanujan – Ken Ono

Love & Math – Edward Frenkel

Hare Brain Tortoise Mind – Guy Claxton

Creativity – Mihaly Csikszentmihalyi

Flow – Mihaly Csikszentmihalyi

Flow in Sports – Susan Jackson and Mihaly Csikszentmihalyi

In Pursuit of Excellence – Terry Orlick

The Art of Nonfiction – Ayn Rand

A Life of One’s Own – Marion Milner

Staying Sane – Raj Persaud

Athena Unbound – The Advancement of Women in Science and Technology

Art and Fear – David Bayles and Ted Orland

The Artist’s Way – Julia Cameron

Flourish, Authentic Happiness and Learned Optimism – Martin Seligman

Resilience – Southwick, Charney and DePierro

Embracing Your Potential – Terry Orlick

Touching the Void – Joe Simpson

Counting From Infinity – documentary about Yitang Zhang

Self-Help for your Nerves – Claire Weekes

An Intimate History of Humanity – Theodore Zeldin

Gold in the Water – P. H. Mullen

Toward a Psychology of Being – Abraham Maslow

The Farther Reaches of Human Nature – Abraham Maslow

Religions Values and Peak Experiences – Abraham Maslow

Natural Obsessions – Natalie Angier

Notebooks of the Mind: Explorations of Thinking – Vera John-Steiner

Out of the Labyrinth – Robert Kaplan and Ellen Kaplan

How Children Fail – John Holt

The Cathedral and the Bazaar – Eric Raymond

The Road to Reality – Roger Penrose

Fashion, Faith and Fantasy in the New Physics of the Universe – Roger Penrose

Influence – Robert Cialdini

An Unquiet Mind – Kay Redfield Jamison

The Varieties of Scientific Experience – Carl Sagan

Solitude – Anthony Storr

Use Your Head – Tony Buzan

Disciplined Minds – Jeff Schmidt

On the Shortness of Life – Seneca

The Creative Habit – Twyla Tharp

The Fountainhead – Ayn Rand

Atlas Shrugged – Ayn Rand

A Ph.D. Is Not Enough – Peter J. Feibelman

59 Seconds – Richard Wiseman

A Book of Secrets – Derren Brown

Thinking, Fast and Slow – Daniel Kahneman


In The Dark Matter Crisis by Elena Asencio and Pavel Kroupa. Pavel Kroupa takes full responsibility for this post. A listing of contents of all contributions is available here.

We recently had a case in which a comment submitted to The Dark Matter Crisis was swallowed by the system and did not appear. The user had to use a different browser to submit the comment, which we then approved. If you submit a comment and it does not appear, please try another browser.

Below is a guest post by a young scientist who had dreamed a dream but left academia, after finishing a PhD in cosmology at a renowned institution, as a consequence of the experiences made there. The prologue sets this into context. In the accompanying post DMC102, “Advice to a Young Scientist”, this young scientist provides a list of resources that proved helpful in their ordeal.


Begin Prologue by Pavel Kroupa:

The young scientist wants to remain anonymous, but was working, as a PhD student and young postdoc, in one of the leading and pre-eminent cosmology groups. It is arguably important to allow this person to air their painful and frustrating experience, as it echoes that of many others. The problems experienced, which led to the person leaving academia, arose in part because a very talented and inquisitive young person was confronted with a (cosmological) model that does not agree with the data, while the important people in the research group enforced this model as the only one worthy of study. This led the young person to ask questions that were not liked, and thus to an increasing ostracization by the group. The picture below (chosen for this post by the young scientist in question) might reflect the situation in many (cosmology-related) research groups: group members subserviently following in each other’s steps without leaving the confines of the circle, i.e., the confines of the allowed thoughts as dictated by those in charge of the money. Keep circling, like the others, and you may have a career. Leave the circle, and you’ve had it.

Prisoners’ Round (after Gustave Doré), by Vincent van Gogh (Wikimedia Commons).

Some of the scandalous occurrences and stories from various academic institutions, made public over the past decade, are well known (a few in astronomy: case 1, case 2, case 3, case 4, case 5). I have my own experience: after publishing, as a young postdoc in 1997, “_Dwarf spheroidal galaxies without dark matter_“, I was told on multiple occasions by senior scientists that I would not stand a chance of being hired. After all, everyone knew and knows that dark matter exists and dominates the matter content of all galaxies and of the Universe. In this 1997 publication I reported the discovery of new stellar-dynamical solutions in which the phase-space distribution function of the remaining stars in an ancient satellite galaxy is highly distorted by the repeated tidal removal of stars from certain regions of phase space as the satellite orbits the Milky Way. The observer ends up seeing an over-density in the sky that closely resembles an observed dwarf-spheroidal satellite galaxy and that appears to be full of dark matter, although the models had no dark matter at all. I was thrilled with this discovery and thought the community would also be interested. But the reaction reported above surprised and disappointed me. For the first time I started seeing that there is something not quite right with the people who rely on the existence of dark matter. Later we explored these solutions further with a student from Germany (Metz & Kroupa 2007) and one from Colombia (Casas et al. 2012), and much still remains to be done. I think these stellar-dynamical results, and the dissonance between what my peers and superiors wanted me to see versus what I saw in the computer and on the sky, are what started to nudge me towards considering non-Newtonian gravitation models: I could not see how Newtonian models with dark matter could provide natural solutions for the satellite galaxies, not to mention the still-unsolved core/cusp problem.
The observational fact that these same satellite galaxies are arranged around our Galaxy like a planetary system, in a relatively thin disk spanning hundreds of kpc, with such disks of satellite systems around other galaxies being the rule rather than the exception, further cemented this dissonance (Kroupa et al. 2005; Pawlowski 2018; Bilek et al. 2021; Pawlowski 2024). These systematically rotating disk-like structures of dwarf galaxies (see this remarkable case: “A co-rotating gas and satellite structure around the interacting galaxy pair NGC 4490/85”) are in complete contradiction with structure formation in the standard dark-matter-based cosmological model, but arise naturally in galaxy–galaxy encounters in which tidal tails, full of gas and possibly some stars from the mother galaxies, align, with clumps within these tails forming new little galaxies, the tidal dwarf galaxies. This works easily, but only if there is no dark matter (Bilek et al. 2018; Banik et al. 2022).

Three years before the above 1997 publication, a major Max-Planck directorship had been filled by a dark-matter cosmologist wielding an incredible amount of guaranteed resources, and the power that goes along with that, for the rest of the director’s active life plus extensions, some 30-40 years. The whole community was on a pure dark-matter trip. I heard a high-ranking colleague of mine refer to someone else I knew and held in high esteem as “a theoretician who had been good until they started working on MOND”; I could see how the reputation of colleagues was being damaged by spreading an opinion merely because they had touched on MOND. I witnessed a senior astrophysicist at a conference in the USA jump up from the lunch table crying out “this is crap” while storming away when someone dared to mention MOND (was it me? I cannot remember, but I remember the violent storm-out). I heard on multiple occasions senior scientists from ivy-league places explain to me that “You can write a paper on MOND, but only to show it is wrong.” Most young people cave in and stay in the circle. A few of us did not heed this (well-meaning?) advice, discovering instead that the real Universe actually makes much more sense in a MOND framework and no sense at all in a dark-matter framework. But some wanted to please those who matter. The many “good child” attempts claiming MOND is falsified have all been rebutted (DMC70). One could, if so inclined, refer to this list of 21 false claims (last updated 22.06.2022) as a list of shame, comprised mostly of useless science that imposed unnecessary costs on the taxpayer.

Not being in one of the by-then rapidly growing numerical dark-matter cosmology groups filled with aspiring, very bright young scientists, I had dared to leave the circle. As a consequence, my later experience on the job (I was hired in 2004, thanks to a few older people and in particular Prof. Dr. Klaas de Boer [1941-2022], who still had a memory of the pre-dark-matter times) was unfortunately dominated by my having left the circle. The extremely pronounced and lived hierarchy specific to Germany was used against me, while the German law of freedom of research and teaching was on my side, as was total job security: my friendly colleagues were not able to get rid of me. Hierarchy and social status (however achieved) remain predominant in 21st-century Europe. Support from the nuclear-physics direction was helpful, however, and some sympathy from parts of the higher echelons of the University also had a positive effect. It was an interesting and fairly viscous battle, certainly psychologically demanding, taking a good decade. It began in earnest in 2010 with the publication of “Local-Group tests of dark-matter concordance cosmology. Towards a new paradigm for structure formation” and reached a broad maximum in 2012-2015 when these papers were published: “The Dark Matter Crisis: Falsification of the Current Standard Model of Cosmology“, “The Failures of the Standard Model of Cosmology Require a New Paradigm” and “Galaxies as simple dynamical systems: observational data disfavor dark matter and stochastic star formation“. I wrote two of the above papers, in 2012 and 2015, as sole author because I did not want to drag anyone into it. During this time there were many not-so-kind sessions; I was also asked a few times why I don’t just leave. A workable agreement was finally put to paper in 2023, allowing me independence and peace. Will it last?
On this matter I note this 2023 publication: “The many tensions with dark-matter based models and implications on the nature of the Universe“.

Do I regret having left the circle? No. I would leave the circle any time again, if the observational data tell me to. This is easy to write with job security guaranteed, but in 1997 I did not have that and still moved against the mainstream, not willingly, but because the data required me to. Maybe I was just naive, thinking that scientific results rather than opinion and belief are what matter. When I was on a visiting fellowship at Harvard’s CfA, the one and only Charlie Lada built me up, making me understand that one needs to persevere. It was during this visit in 1999 that I heard a most influential (for me) colloquium held by Stacy McGaugh at the CfA. He came for a visit, and the colloquium, as I remember it, was an apology to the audience that he could not disprove MOND, all his data instead indicating its verification. The deep question to ask here is: why was leaving the circle such a rare occurrence? Why did (nearly) everyone else stay in the circle despite a (supposedly) good training in the physical sciences? Admittedly, others did have an easier and better career within the University. Of course one can always argue that some are simply better scientists. ScholarGPS now allows a fair and unbiased assessment of this, in which opinion is not relevant (I am the second most cited physicist at Bonn University). I only recently had an interesting encounter at a University event with a high-ranking person from the computer centre. Upon recognising my name, the person exclaimed having heard that I had misunderstood it all. I did not ask from whom this had come, but asked some questions on quantum computing.
“Having misunderstood it all” is an interesting opinion-based statement (the higher hierarchy talks amongst its own) in view of ScholarGPS placing me amongst the top 336 of all physicists worldwide (of 994820 persons), as the top-ranked scholar across all fields at Charles University, and sixth at Bonn University, where I am second-ranked in physics.

My interpretation of these by-now not-rare scandalous occurrences is that they are, apparently, human, since they are quite widespread.

The scandals are part of being human. But as humans we need to impose rules that shield others and ourselves from our own inadequacies as best possible.

A scientist (this is about science, and specifically cosmology-related science, not history in general) who is given too much power and financial control can (and all too often does) lose their foothold on reality, and is likely to misuse their power by bullying younger scientists into submission concerning permitted thinking; of course there is also the ever-surfacing sexual and even racial dimension. The owner of resources (typically a leader in the field) is under pressure, even if self-exerted, to improve their own amazing amazingness in the field, e.g. by wanting to get the next major prize or hugely huge grant, and thus may push team members under their financial dependency too hard (e.g. by expecting team members to attend night meetings or to work on weekends, or by putting down team members in front of the others if some work is not done or an attempt at some calculation has failed).

Today the situation is dire planet-wide: wherever one looks, at all the countless universities, from China through Iran to the USA and South Africa, dark-matter research is the only real option. How did the scientific system deteriorate into this theory monopoly? Was it not thought that the scientific establishment self-corrects? How can nearly the whole scientific establishment so disregard the observational evidence (2012, 2015, 2023, DMC99) and so vehemently discard Milgromian dynamics? I have myself many times, including in the present days, been told by those superior and in charge that MOND is not a theory and that it does not work. They like to say:

“If so few of the scientists doubt the presence of dark matter, how can it be that it is not there? If the majority of researchers reject MOND, then MOND cannot be relevant. If a Nobel Prize was awarded for discovering the accelerated expansion of the Universe, then how can the Universe not be in this state?”

So say individuals of importance who, I know, do not know the evidence! But they are system-relevant, loud and mind-invariant.

Over time and for reasons not entirely understood, but perhaps related to the desire to hold onto Einstein’s exact description of how gravitation must be understood, a group of people rose to sufficient prominence during the 1990s to start dictating that the dark-matter-based models are the only ones worthy of study. This goes along with shading a leading contender for an alternative (MOND) as inferior, such that young scientists shy away from looking into MOND. An example of this is the amazing public-relations gag concerning the Bullet cluster, which even today is (wrongly) understood by the majority to have damaged MOND fatally (see The Dark Matter Crisis DMC5, DMC6, DMC7, DMC28, DMC82; and the very recent discussion of the Bullet Cluster in Jan Pflamm-Altenburg’s paper). Once, at a conference in Durham some ten years ago, I had the notable experience that after my talk no one dared to approach me at coffee time; I felt I must be exerting some repulsive force, as there was a bubble around me empty of people. In the evening, though, after the dinner and when all the important scientists had left, this reversed, and there was a significant overdensity of young people around me asking diverse questions.

Even today nearly all scientists shy away from admitting that dark matter does not exist: the younger ones due to a keen sense of threat to their own existence, the older ones because they fear for their reputation amongst their peers in the want for the next big prize, award or grant. Thus they all do not want to leave the circle, lest they be branded “lunatics” by those that matter. I will never forget the first thing the esteemed Lord Martin Rees of Ludlow said to me, which sounded next to identical to “Yes, I remember you, and you are now working on MOND”, when I met him for the first time after a long while, in Qatar at an international conference on microlensing and exoplanets some dozen years ago. I had gone up to him to say hello after his incredibly beautiful lecture, never forgetting to this day his touching and great kindness in personally greeting me when I arrived as a fresh (and lost) PhD student at the IoA in Cambridge in late 1988. His immediate statement in Qatar was meant nicely, with a real sense of positive recognition of my person, but the mention of MOND in the first sentence in association with my name came as a surprise. And it made me proud!, knowing from then onwards that I am known for something. Nearly every young person not on a permanent contract, even those I worked with, becomes petrified when needing to interpret the many fatal failures of the dark-matter model of structure formation (e.g. when writing up papers on barred galaxies, disks of satellites, inhomogeneities on Gpc scales). Never have I met a young scientist ready to admit that a particular 5-sigma tension between the data and a dark-matter model rules out dark matter. Yet the same person happily agrees that the same 5-sigma threshold essentially proves the Higgs boson’s existence.
People dare not say “Dark matter does not exist”, but they are happy to exclaim “The Higgs particle has been found!” They evade the dark-matter-existence question as if it were a question relating to the existence of The Deity.
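For readers unfamiliar with the sigma convention invoked above: a 5-sigma deviation of a Gaussian variable in the observed direction has a chance probability below 3 in 10 million, which is the threshold at which the particle-physics community declared the Higgs boson discovered. A minimal sketch of the arithmetic, using only the Python standard library (the function name is my own, for illustration):

```python
from math import erfc, sqrt

def sigma_to_p(n_sigma: float) -> float:
    """One-sided Gaussian tail probability of an n-sigma deviation.

    P(Z > n_sigma) for a standard normal Z, computed via the
    complementary error function: P = erfc(n / sqrt(2)) / 2.
    """
    return 0.5 * erfc(n_sigma / sqrt(2.0))

# The 5-sigma threshold accepted as "discovery" for the Higgs boson:
print(f"5 sigma: p = {sigma_to_p(5.0):.2e}")  # about 2.9e-07
# For comparison, a 3-sigma "tension" already has a fluke
# probability of only about one in a thousand:
print(f"3 sigma: p = {sigma_to_p(3.0):.2e}")  # about 1.3e-03
```

The same arithmetic applies regardless of which hypothesis is being tested, which is the point of the paragraph above: a 5-sigma tension with a dark-matter model carries the same statistical weight as the 5-sigma Higgs detection.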

This is especially pronounced in cosmology, a topic particularly removed from the scientific method and much more prone to beliefs.

The young people enter a cosmology group as PhD students and realise (assuming they have some IQ and a backbone) that the dark-matter-based models simply do not work. And yet they are told that there is dark matter and that the cosmological simulations reflect reality very well. A student might hear “Now don’t you worry about that, there are very clever scientists working on solving that problem and you won’t be able to contribute”, or “We have been very aware of this problem for a long time and know how to solve it, but are now working on more important questions that actually matter”, or “Our most recent, not yet published, high-resolution cosmological simulations show that this problem has gone away”, etc. A strong statement along the lines of this last comment was made by a then already highly important cosmologist in Heidelberg, who is today a Max-Planck director in Garching, at the end of my presentation in Heidelberg in 2014 (at time 1:10:00). He claimed that the (not yet published) suite of simulations (he would have been referring to the Illustris set) gives the correct morphological mixture of galaxies. His claim is disproven, as I predicted in my reply then in 2014, by the publication “_The High Fraction of Thin Disk Galaxies Continues to Challenge ΛCDM Cosmology_” (Haslbauer et al. 2022, ApJ). This is in fact an excellent case of a high-ranking scientist broadcasting a broad, intimidating statement into a packed lecture hall such that younger scientists dare not disagree for the sake of their own job prospects. It also demonstrates how (incorrect) claims help one’s career; imagine the effect on the careers of such “broadcasters” (people making unproven claims based on opinion or belief) if they had correctly reported that the dark-matter-based structure-formation simulations do not lead to the observed Universe.

Here is an example of how successful this process of subtle intimidation, and even brainwashing, is in our present “modern” era: it leads to young astrophysicists developing an entirely wrong understanding of the real Universe, to the point where they actually end up believing that the dark-matter-based models agree with it. Many (most?) young astronomers (PhD students as well as postdoctoral researchers) come out of their education believing that elliptical, or more generally early-type, galaxies are the dominant population of galaxies. This is what the LCDM model gives as the dominant population, a consequence of galaxies in this model forming largely through many mergers (“The High Fraction of Thin Disk Galaxies Continues to Challenge ΛCDM Cosmology“). These people are taught entirely wrong knowledge, since elliptical galaxies make up only a few percent of the galaxy population. Elliptical galaxies of all masses are the exception in the real Universe, and this has been documented in the literature since the 1980s (“_The luminosity function of galaxies_“: Binggeli 1988; “_How was the Hubble sequence 6 Gyr ago?_“: Delgado-Serrano, Hammer et al. 2010). In the real Universe, an elliptical galaxy is like an accident on the German Autobahn: very rare but spectacular, the outcome being roundish wrecks. In the real Universe, elliptical galaxies form rapidly and early, with their stars being ancient, while in the LCDM model they form over long times through many crashes, and their model stars have a large range of ages, causing a catastrophic disagreement with the real elliptical galaxies, as shown by Robin Eappen and collaborators (“The formation of early-type galaxies through monolithic collapse of gas clouds in Milgromian gravity“). This failure of the LCDM model is so obvious and massive that I honestly think there is something wrong with people who rely on the existence of dark matter.

While such (bullying and brainwashing) behaviour is not excusable in the natural sciences in the 21st century, it apparently occurs all the time. How can one reduce its incidence?

I think this is not difficult:

Abolish all powerful professorships and prizes – they only corrupt science! The notion that competition for grants leads to the best ideas winning is complete nonsense in the pure sciences.

At this point my colleagues would argue: you see? This PK is the problem.

But in earnest: the scientific system needs to be modernised (spartanised) so as not to allow individuals who manage to ascend into some role of importance to misuse their standing. It goes against the scientific principle to appoint a young professor to a chair with an immense amount of funding for the rest of their research career. Such people are likely to do more damage than produce actual progress, since the resources at their disposal are locked up in their hands for decades, leaving little room for other scientists to do research in novel directions. Providing overly large grants leads to the same. A grant holder (e.g. of an ERC grant – a young postdoc) is seen as an Einstein; the institutions go crazy and mad to hire them, with, in the end, a young postdoc leading other young postdocs, who however also need to develop their own careers. It has happened, according to my recollection, that an offer of a permanent staff position to a brilliant researcher (with family) was retracted by an institution not far from where I am sitting now, which instead hired a youngster who had suddenly come up with an ERC grant and of whom I never heard again after they took up the position.

The whole idea of competing for grants in the pure fundamental sciences is nonsense. As an example, the then-innovative idea that spectral lines may be related to matter being made of particles with wave properties would never have stood a chance of support in the 1890s, and the problem Albert Einstein faced in getting hired into a faculty before 1905 is legendary. One of the reasons we have stagnation in physics is precisely that the system has shifted towards recognising (supposed) talent through success in grant acquisition. This kills off truly innovative ideas, because only “new” ideas that are likely to pass the peer-review process stand a chance of being supported.

In this situation, the hiring institutions forget that grant proposals are based on a well-written text full of promise and rarely full of achievement.

Indeed, it has been lamented with increasing frequency that physics appears to be in a crisis: there has been no real progress, no breakthrough in fundamental physics, for the past 50 years or so (e.g. “_How to escape scientific stagnation_“, “_Why the foundations of physics have not progressed for 40 years_“). In “Science is getting less bang for its buck” (The Atlantic, Nov. 16th, 2018) we read:

“While understandable, the evidence is that science has slowed enormously per dollar or hour spent. That evidence demands a large-scale institutional response. It should be a major subject in public policy, and at grant agencies and universities. Better understanding the cause of this phenomenon is important, and identifying ways to reverse it is one of the greatest opportunities to improve our future.”

In my Golden Webinar I pointed out that there seems to be a correlation:

The decrease in progress in fundamental physics appears to anti-correlate with the rise in the number of research prizes and important professorial chairs.

My interpretation of this is that the institutionalisation of major prizes and awards in the pure fundamental sciences has led to a shift away from the goal to understand how nature works (compare with Galileo’s drive and motivation) to a goal of securing not just a job, but a career with awards.

In some countries where the scientific system has only recently emerged, it even suffices to publish a single Nature paper to get a professorship – how mad is that? An author of such an article has at best learned how to modulate a text into Nature format and into what Nature wants via peer pressure. A Nature publication is certainly not a reliable documentation of an unbiased major research result. How can such a professor lead a part of an institution where new young scientists are supposed to learn the pure scientific method of hypothesis testing, data acquisition and theory development, free of peer-pressure thinking?

One suggestion for achieving a reform of the system would be to allow PhD students and postdocs control of their own money, such that they can take it away and work elsewhere. Another would be to abolish prizes and the Nature journals, since the lure of a prize or award, or of getting a paper accepted in Nature, skews the approach a scientist might take. Awards and publications in Nature corrupt science.

Some institutions only hire as professors people who have published one or two papers in Nature. This is a total calamity for scientific progress: wanting or needing to publish in Nature produces publications that are modulated to be a selling story, rather than to objectively and rationally report a result.

After all, a scientist doing research should be doing research because they want to understand the laws of nature and discover new ones. Careerists should go to banks (albeit this might spell a financial calamity for others) or better to some other company. Scientists ought to be seeking higher goals than merely prizes.

Today, a scientist’s major research “achievement” is to obtain a major prize. With an ERC grant or the like in pocket, the scientist is seen as a new Einstein, without necessarily having truly advanced our understanding of nature. The many ERC grants, or equivalents, given out by, for example, the ERC grant committees for dark-matter cosmological projects constitute a pure destruction of taxpayers’ money, as no advance relevant to nature will be achieved.

Returning to the above-mentioned no-progress-in-fundamental-physics problem, Sabine Hossenfelder has been emphasising how particle physics has come to an essential standstill. In the “Science needs reason to be trusted” contribution (Nature Physics Commentary, 05.04.2017), Sabine Hossenfelder argues that science is in the grip of “post-factualism”, with empirical evidence being subdued by increasingly complex theoretical/mathematical ideas. Given the continued lack of progress, not only in physics but also in modernising the system, this contribution even suggests defunding particle physics entirely, the argument being that the honest work of the taxpayer is knowingly misused by dishonest research by the particle physicists. This crisis in particle physics is mirrored by the dark-matter-based theoretical models that keep blossoming in ever more complex variations (self-interacting dark matter, fuzzy dark matter, dark photons, different sorts of dark matter, time-variable dark energy, different dark-energy forms that kick in at various cosmological times). While developing mathematical formulations of physical phenomena is required, and is indeed a very serious endeavour that only few can and ever will truly master, the problem here is that the community has narrowed this endeavour to only one allowed game genre (the standard dark-matter-based one) by creating disgust towards others who study other genres (notably MOND). I noted the (carefully voiced) disgust in a recent Zoom session with Indian computer scientists who work on galaxy classification using machine learning, when I simply asked them what they thought of MOND… This is unforgivable, because the MOND approach actually immediately resolves the known problems related to galaxies, structure formation and the Hubble Tension, as documented here in The Dark Matter Crisis!
The current hot topic, the Hubble Tension, is readily and easily resolved (indeed, it does not even occur) in a MOND-based model of the Universe, as shown by Sergij Mazurenko, Indranil Banik and collaborators in this 2024 paper. Another hot topic, currently causing the dark-matter-believing people to go into a frenzy, is the observed but not-allowed existence of massive galaxies at very high redshifts, as demonstrated by Haslbauer et al. in this 2022 paper. While impossible in the LCDM model, a MOND-based model of cosmology readily accounts for them, as argued by Stacy McGaugh and collaborators (2024). That the research programme based on MOND is so much more successful than that based on the standard dark-matter approach (which is essentially without success) is described in David Merritt’s award-winning book “A Philosophical Approach to MOND”, published by Cambridge University Press in 2020.

It is scandalous that this very promising avenue of exploration is being largely shut off through peer pressure, such that bright minds who need a career to survive are discouraged from using their brilliance to investigate the possible deeper fundamental physics underlying the MOND phenomenon. It is as if the community had massively discouraged young physicists from exploring the wave nature of particles in the early 1900s, such that Werner Heisenberg and Erwin Schrödinger would have been peer-pressured to work on other, peer-acceptable problems.

Essentially, one possible interpretation is that the above contribution argues that fundamental physics appears to have died by its own will; the community at large (with few individual non-circle exceptions) has knowingly committed suicide, as it were, and so why should the members-of-the-circle be funded? A dead fish needs no feeding. See the above contribution for the arguments that have been brought up against defunding.

While our societies appear to be undergoing major changes, with upheavals looming on the horizon that may be as grand and deep and transforming as those of the first Thirty Years’ War 400 years ago and the second thirty-years war (WWI+WWII) 100 years ago, maybe a mere spartanisation of academia might be something to try first before giving it all up.

How can one design a system in which the scientific enterprise flourishes while also mitigating the above-described problems, in which human nature leads to misconduct? I think this is readily possible, and I hope to return to it in a future post.

End Prologue


The young scientist writes:

Let me introduce myself. I am a former LCDM theoretical cosmologist who spent an extended period (over 5 years) as a student/postdoc in one of the major world LCDM groups, and had a terrible time, learning nothing except how to hate science (when science is the love of my life!). I have now thankfully found out about MONDian gravity and the community surrounding it. My old professor is still (incredibly undeservedly) a very powerful presence in the LCDM world, so for my own personal protection I don’t want to give too many identifying details. But anyway, here is my story in poetry, and my thoughts about the whole scientific and sociological situation in poetry and song!

The following expresses my hope upon embarking upon my PhD and my despair by the time I finished it 5 years later:

‘I dreamed a dream’ from Les Miserables

"Then I was young and unafraid
And dreams were made and used and wasted
There was no ransom to be paid
No song unsung, no wine untasted

But the tigers come at night
With their voices soft as thunder
As they tear your hope apart
As they turn your dream to shame

I had a dream my life would be
So different from this hell I'm living
So different now from what it seemed
Now life has killed the dream I dreamed"

My PhD supervisors really did turn my dream into shame – in the words of Ayn Rand from the Fountainhead:

“Not selfishness, but precisely the absence of a self. Look at them. The man who cheats and lies, but preserves a respectable front. He knows himself to be dishonest, but others think he’s honest and he derives his self-respect from that, second-hand. The man who takes credit for an achievement which is not his own. He knows himself to be mediocre, but he’s great in the eyes of others.”

I simply could not buy into it all – another quote from The Fountainhead:

“To sell your soul is the easiest thing in the world. That’s what everybody does every hour of his life. If I asked you to keep your soul – would you understand why that’s much harder?”

Stacy McGaugh speaks a lot on his blog about integrity – and I see a deep integrity in the MOND world which I frequently did not see in the CDM world. Again from Ayn Rand:

“And what, incidentally, do you think integrity is? The ability not to pick a watch out of your neighbor’s pocket? No, it’s not as easy as that. If that were all, I’d say ninety-five percent of humanity were honest, upright men. Only, as you can see, they aren’t. Integrity is the ability to stand by an idea.”

In words attributed to Paul Dirac: “Scientific progress is measured in units of courage, not intelligence”. I would say that there are very many who have blazed the trail in MOND who have shown a lot more courage and integrity than I did. (By rights I should have jacked in my PhD and started afresh on something that made sense to me, but I had a studentship, and after one year had gone by I was no longer eligible for funding for a new project, so I hung about morose and unhappy and a lot of PPARC funding was wasted.)

I felt pretty much alone during my PhD – I did not find the ‘supervision’ very helpful – indeed it was at times a training in how NOT to do science. Again from Ayn Rand:

“It’s easy to run to others. It’s so hard to stand on one’s own record. You can fake virtue for an audience. You can’t fake it in your own eyes. Your ego is your strictest judge. They spend their lives running. It’s easier to donate a few thousand to charity and think oneself noble than to base self-respect on personal standards of personal achievement. It’s simple to seek substitutes for competence–such easy substitutes: love, charm, kindness, charity. But there is no substitute for competence.” Unfortunately, I frequently did not find my supervisors even competent. There were a number of occasions where they made the simplest mistakes, and I did not see any evidence with my own eyes that justified the high reputation of one of them, currently of the utmost high repute in the CDM world. There were papers where they artificially generated self-citations by misattributing scientific progress to their former papers, and there were many times where I was rushed to ‘produce a result’ without checking it was correct.

The continual stress during my PhD, and afterwards without a job, weighed heavily on me, and has led to 7 psychiatric hospitalizations in total. During my 6th hospitalization, while recovering from all of this, I wrote this poem about MOND, which expresses a lot of my thoughts perfectly:

On a Very Modular Deep Potential Keep

We begin with an ode to the odious [redacted] - The conniving "His Lordship" of drudge. His cold dark matter illogicality so protracted - Is but a sludgy crap-shoot of an academic fudge. And a faintless maiden fair here refuses to fearful budge.

The 'NFW' profile (alas be it so unvisionary) - Privates many a postdoc or student till madly insane. Dubiously benefiting fantasist men supervisionary - Stoking sloppy science - simply their careerist competitive game. The 'superiority complex' donkeys really are that inane.

Rise up and praise our emanating dawn - Modification of Newtonian ideas of gravity. Put an end to ideations of LCDM so worn - And all of that cluttered depravity. What an utter twee tapestry of travesty.

Constellations align and allow us to plough and scatter much money - Be it Great British Pounds (or a few Roman Denarius). Halt aggressive regress - progressing enlightenment sound and sunny - Respawning models yoked by we fishers of men so hilarious. A great sure day of dawning of a lengthening age of Aquarius.

The Evil Axis of the Cosmic Microwave Background - Is it but an artefactual residue of MOND? Let's create a WMAP-recreating hacker-ground - By a 'calibrating' dipole may we have been be-conned? Plumb the heart of the matter of the CMB pond.

For the truth of the fundamental - Science is a humanistic search. March we as an organization regimental - In an anarchistic broadening church. (With public marketing of the merch').

Heaven and Earth doth conspire - For MONDian perturbations about the sun. Let us make many academic hires -  And do the science at a run. (Forgive the poetic puns - what fun!).

For those disenfranchised or scientifically homeless - Make MOND gravity a home with a hearth. Yet our theory is currently tomeless! So type a textbook to illumine the path. Anointing it bodily in blackest of microwave bath.

For our metier shall we compose many papers - In a constantly-accelerating cascading heap. For Mordehai parted the watery curtains for we drapers -  A great forwards 40-year conceptual leap. Thee hast sowed what ye shall and will reap.

[The creeping realization that MOND gravity is a very modular deep potential keep -  and Heaven and Earth do weep]

During all of the hard times with my supervisors, it was incredibly hard to keep my soul. I did what I had to, but little more. Again from Ayn Rand: “I came here to say that I do not recognize anyone’s right to one minute of my life. Nor to any part of my energy. Nor to any achievement of mine. No matter who makes the claim, how large their number or how great their need. I wished to come here and say that I am a man who does not exist for others.” – Howard Roark. My supervisors might have got me a grant for my PhD, but I do not feel they deserved my time. I spent most of my time reading self-help books or searching round on the arXiv for an alternative project (and reading extremely widely in astronomy and cosmology, which was not much use at the time, but at least kept me sane and stood me in good stead for later). I don’t think I impressed my supervisors a great deal. But, in the end, in the words of Ayn Rand again: ‘“But you see,” said Roark quietly, “I have, let’s say, sixty years to live. Most of that time will be spent working. I’ve chosen the work I want to do. If I find no joy in it, then I’m only condemning myself to sixty years of torture. And I can find the joy only if I do my work in the best way possible to me. But the best is a matter of standards—and I set my own standards.”’

I have quoted a lot from Ayn Rand. I don’t think much of her views on unadulterated capitalism as a way to run the economy, or on the general way humans should treat one another interpersonally, but I find her philosophy correct for the way science has to and needs to work: ideas and arguments should stand on their own merits, not on (to quote a book title by Roger Penrose) ‘Fashion, Faith and Fantasy’, nor on the reputations of their heavily socially-networked and highly politicised proponents. As Ayn Rand says: “Degrees of ability vary, but the basic principle remains the same: the degree of a man’s independence, initiative and personal love for his work determines his talent as a worker and his worth as a man. Independence is the only gauge of human virtue and value. What a man is and makes of himself; not what he has or hasn’t done for others. There is no substitute for personal dignity. There is no standard of personal dignity except independence.”

So I didn’t have such a good time during my PhD, and did not find a job afterwards – which would have been more of the same (and so held little attraction; and I wasn’t in a position to find an independent fellowship, which was about the only kind of position I was particularly interested in). Having no job, I spent a number of years afterwards looking round all corners of astronomy and cosmology for a ‘little project’ that I could work on at home alone without my supervisors, and probably emailed around 15-20 scientists (none of whom replied) with project ideas. Finally, I came across MOND via Sabine Hossenfelder’s blog, which led me to the Triton Station blog and ‘The Dark Matter Crisis’. I was hooked! I realised that MOND was a highly viable theory and that CDM was in fact already falsified; I made contact with the MOND community, and finally found a home! Thanks to funding provided by my family (as has been the case for many years), I am now working on a project at the frontiers of cosmology, on the topic of the MOND paradigm, and have found people interested in my ideas from whom I can get advice! I would certainly advise this course of action (i.e. contacting the MOND community) to any dissatisfied LCDM researchers in a similar position to the one I was in.

However much I struggled to find a job after I had finished my PhD, and however long it was before I was able to find potential collaborators, and whilst I regret choosing the PhD institution that I did (I would recommend that aspiring graduate students pay far better attention to these matters than I did), I don’t regret not taking the advice of my ‘supervisors’ – in the words of Edith Piaf: “Non, je ne regrette rien”.

Never, ever, ever, sell your soul! Remember the words of the poem “The Man in the Glass” by Peter Dale Wimbrow Sr.:

When you get what you want in your struggle for self
And the world makes you king for a day
Just go to the mirror and look at yourself
And see what that man has to say.

For it isn’t your father, or mother, or wife
Whose judgment upon you must pass
The fellow whose verdict counts most in your life
Is the one staring back from the glass.

He’s the fellow to please – never mind all the rest
For he’s with you, clear to the end
And you’ve passed your most difficult, dangerous test
If the man in the glass is your friend.

You may fool the whole world down the pathway of years
And get pats on the back as you pass
But your final reward will be heartache and tears
If you’ve cheated the man in the glass.

I might look like a failure to the world, but at least I can look at myself in the glass.


In The Dark Matter Crisis by Elena Asencio and Pavel Kroupa. Pavel Kroupa takes full responsibility for this post. A listing of contents of all contributions is available here.

We had a recent case where a submitted comment to The Dark Matter Crisis did not appear in the system, the comment being swallowed. The user had to use a different browser to submit the comment which we then approved. In case you submit a comment and it does not appear, try another browser.

Foreword by Pavel Kroupa: With this guest post, Dr. Indranil Banik announces a parallel session on the Hubble tension that he is co-organising for the UK National Astronomy Meeting taking place this year in Durham. He also points out the special issue of the journal Galaxies, on this same topic, for which he is a guest editor. Attend the meeting and submit a paper if you are a young (or older) researcher working on this topic.

Guest post by Dr. Indranil Banik.

We are organising a parallel session on the Hubble tension at the UK National Astronomy Meeting 2025 in Durham. The other organisers are Harry Desmond (ICG Portsmouth), Eleonora di Valentino (Sheffield), and Tom Shanks (Durham). If you are not already familiar with this pressing issue for cosmology, it is explained in the first two sections of a previous post. As a quick recap, the present expansion rate of the Universe (_H_0) can be predicted with the Lambda-Cold Dark Matter (ΛCDM) standard cosmological model from the observed anisotropies in the cosmic microwave background (CMB) of the infant Universe. This prediction can be checked against the local redshift gradient: how quickly the redshift rises with distance in the nearby Universe. This increase arises because photon wavelengths are stretched in an expanding universe; the longer photons have been travelling, the longer their wavelengths become. The prediction of _H_0 from ΛCDM calibrated to the CMB anisotropies falls significantly short of the value inferred from the local redshift gradient (multiplied by the speed of light), which excitingly is about 9% larger than expected (e.g., Uddin et al. 2024).
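The arithmetic behind the tension can be sketched in a few lines of Python. This is an illustrative toy, not part of the post: the mock distance-redshift pairs are invented, and the locally-measured and CMB-calibrated values of _H_0 (73 and 67.4 km/s/Mpc) are round numbers commonly quoted in the literature. The point is simply that the local _H_0 is the slope of recession velocity v = cz against distance d.

```python
c = 299792.458  # speed of light in km/s

# Hypothetical mock (distance in Mpc, redshift) pairs lying on a
# H0 = 73 km/s/Mpc Hubble law, v = c*z = H0*d:
mock = [(50.0, 50.0 * 73.0 / c), (100.0, 100.0 * 73.0 / c), (200.0, 200.0 * 73.0 / c)]

# Least-squares slope through the origin: H0 = sum(v*d) / sum(d*d)
num = sum((c * z) * d for d, z in mock)
den = sum(d * d for d, _ in mock)
H0_local = num / den  # km/s/Mpc

H0_cmb = 67.4  # commonly quoted CMB-calibrated LCDM value
tension_percent = 100.0 * (H0_local - H0_cmb) / H0_cmb
print(f"local H0 = {H0_local:.1f} km/s/Mpc, {tension_percent:.1f}% above the CMB value")
```

With these round numbers the discrepancy comes out near 8-9%, of the same order as the tension described above.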

The aim of the session is to bring together members of the cosmological and astrophysical community in order to discuss the Hubble tension and its implications for cosmology.

Figure 1: Graphic illustrating our parallel session at NAM2025 on the Hubble tension. See if you can understand what the different features mean and why they are important to the Hubble tension!

National Astronomy Meetings are major events in the British astronomical calendar, typically with considerable media coverage. A wide range of topics related to the Hubble tension are within the scope of our parallel session, as illustrated in Figure 1. On the observational side, we certainly want to hear about measurements of distances in the local Universe related to the measurement of the local redshift gradient. On the theoretical side, we are interested in proposals to solve the Hubble tension through new physics that leaves its mark at any stage of cosmic history. Obviously, if you are proposing a model, you should be able to explain why it can fit the CMB anisotropies consistently with the local redshift gradient, ideally demonstrating this using detailed published calculations. A conference provides a good platform to discuss the advantages and disadvantages of your model. If it has extra parameters beyond ΛCDM, please also try to weigh up whether the improved fit to data that you will inevitably achieve justifies the extra theoretical flexibility. Models at an early stage of development that cannot yet be confronted with the latest cosmological data are not well suited to this conference.

We are particularly keen to encourage early career researchers to present their work. The scientific value of each talk will be our primary consideration, but we will be operating slightly more relaxed rules for early career researchers. For instance, contributions should normally be based on peer-reviewed studies, but in this case it should be sufficient if your work is not published but follows standard techniques, or if it is really quite novel but has been resubmitted in response to referee comments that allow resubmission. In any case, your work should be on arXiv if you want to talk about it, demonstrating your commitment to your results and allowing others to read them in advance of the conference. The peer review process is of fundamental importance to science and will guide our selection of talks. We will have flexibility to adjust talk durations if appropriate. Feel free to get in touch with us to enquire about how suitable your contribution might be. Note that unlike the MOND conferences in 2019 and 2023, posters are not very effective at the British National Astronomy Meetings due to the large number of posters, but these can still be helpful for works at a very early stage.

Some of us are guest editors of a special issue to be published by the journal Galaxies on the Hubble tension. Please get in touch if you are interested in contributing. It is meant for the latest updates on the Hubble tension, both theoretically and observationally. Reviews are particularly welcome — Galaxies is well suited for reviews because other mainstream astrophysics journals often do not do them. Just as an example, the special issue would be well suited to an article on cosmic chronometers and how these can help to set an absolute timescale to the cosmic expansion history, highlighting relevant literature and showing the latest evidence. Articles rebutting previously proposed models are also welcome, since it is possible that the latest evidence disfavours ideas that were once viable. Please get in touch if you would like to contribute, then we can discuss how suitable your proposed article would be and we can obtain waivers for you to avoid paying article processing charges. We also expect to include a summary of discussions at our parallel session as part of this special issue, most likely as an editorial.


In The Dark Matter Crisis by Elena Asencio and Pavel Kroupa. A listing of contents of all contributions is available here.


On January 14th, 2025, I held a Bonn-History-and-Philosophy-of-Physics-Research Seminar (for link see below) in the Lichtenberg Group for History and Philosophy of Physics at the University of Bonn with the title “The dark-matter-free universe: the application of Chandrasekhar dynamical friction to test for dark matter particles and the consequences for fundamental physics thereof”.

The aim of this presentation, which I recorded after giving the above seminar, is to provide an explanation to students and other scientists as to how Chandrasekhar dynamical friction works and how this process leads to orbital-energy dissipation of interacting galaxies.

The existence of dark matter particles that gravitate but do not significantly take part in the three fundamental forces of the standard model of particle physics (which mathematically describes the “normal matter”) is the fundamental basis, the central pillar, of the current standard model of cosmology (the SMoC; the other two pillars being inflation and dark energy). Dark matter leads to the growth of structure, such that galaxy-like objects can exist in the model, and to the phenomenon of these model galaxies merging rather than just flying past each other! Without dark matter, galaxies cannot form in the SMoC, and without dark matter they merge rarely.

Without the dark matter particles, the SMoC collapses and is invalid (e.g. Kroupa 2012; Kroupa 2023; Kroupa et al. 2023). Many times have I heard, even from non-LCDM people, that one cannot falsify dark matter because it is unknown what it is, and that it can therefore have all sorts of properties. This is, excuse my wording, non-scientific gibberish (non-scientific, because a “theory” that cannot be tested and falsified is not logically verifiable science; gibberish, because it makes no sense): the current SMoC, the much-believed-to-be-correct LCDM model of cosmology, is valid if and only if the “SMoC-type dark matter” is made up of gravitationally active particles that do not significantly interact with normal-matter particles via the three fundamental forces, and that have de Broglie wavelengths of at most a few tens of pc in order to account for the dark matter in ultra-faint dwarf satellite galaxies. As explained in the presentation below, the “SMoC-type dark matter” implies Chandrasekhar dynamical friction in regions around galaxies that are well beyond their observable extents, and without these particles and the Chandrasekhar dynamical friction they imply, standard (Einsteinian/Newtonian) gravitation cannot form galactic-scale structures.
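As a rough illustration of where the de-Broglie-wavelength constraint leads, one can invert λ = h/(mv) to obtain the minimum particle mass. The sketch below is not from the post; the wavelength (30 pc, i.e. “a few tens of pc”) and the velocity dispersion (10 km/s, typical of stars in a faint dwarf galaxy) are assumed illustrative inputs, not measured values.

```python
h = 6.62607015e-34   # Planck constant, J s
pc = 3.0857e16       # metres per parsec
eV = 1.602176634e-19 # joules per electronvolt
c = 2.99792458e8     # speed of light, m/s

lam = 30.0 * pc  # assumed maximum de Broglie wavelength (~ a few tens of pc)
v = 1.0e4        # assumed velocity dispersion ~ 10 km/s, in m/s

# de Broglie relation lambda = h/(m*v)  =>  m = h/(lambda*v)
m_kg = h / (lam * v)
m_eV = m_kg * c**2 / eV  # express the rest-mass energy in eV
print(f"minimum particle mass ~ {m_eV:.1e} eV")
```

With these inputs the bound lands around 10^-20 eV, in the ultra-light (“fuzzy”) regime, showing how small a de Broglie wavelength of tens of pc forces the minimum particle mass to be.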

The Winnie-the-Pooh test (DMC90 and DMC91, i.e. the Chandrasekhar dynamical friction test) applies precisely to these particles that are needed if Einsteinian/Newtonian gravitation is to remain valid. If the dark matter has “other unknown properties”, then it is at the core of a different cosmological model in which we have no clue how structures form if they form at all, remembering that at the onset of structure formation the model universe must be flat, homogeneous and isotropic to fulfil the CMB constraints, i.e., essentially without structure at a redshift near 1100.

Applying the well-known and well-understood dynamical process of Chandrasekhar dynamical friction to well-observed systems allows us to test for the existence of dark matter particles (Kroupa 2015). The test has been applied to different systems (satellite galaxies, galactic bars, groups of galaxies). It has conclusively and compellingly excluded the existence of dark matter particles. This falsification of their existence by the Chandrasekhar dynamical friction test was already established by 2015, and it has been confirmed over and over again by now: dark matter particles do not exist. This explains why no dark matter has ever been found in any accelerator, laboratory or indirect experiment. “Dark Matter Escaping Direct Detection Runs into Higgs Mass Hierarchy Problem” was just published by Bharadwaj et al. (2024), affirming the very grave problems dark matter scientists are facing.
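For readers wanting a feel for the physics behind the test, here is a minimal sketch of the standard Chandrasekhar dynamical-friction formula for a massive body moving through a Maxwellian background of particles (the textbook expression, e.g. Binney & Tremaine). The LMC-like input numbers below are rough assumptions chosen for illustration only; they are not the values used in the papers cited here.

```python
import math

G = 4.30091e-6  # Newton's constant in kpc (km/s)^2 / Msun

def chandrasekhar_decel(M, rho, v, sigma, lnLambda):
    """Chandrasekhar dynamical-friction deceleration for a body of mass M
    (Msun) moving at speed v (km/s) through an isotropic Maxwellian
    background of density rho (Msun/kpc^3) and dispersion sigma (km/s).
    Returns the deceleration in (km/s)^2 per kpc."""
    X = v / (math.sqrt(2.0) * sigma)
    # fraction of background particles moving slower than the body
    bracket = math.erf(X) - 2.0 * X / math.sqrt(math.pi) * math.exp(-X * X)
    return 4.0 * math.pi * G**2 * M * rho * lnLambda * bracket / v**2

# Assumed illustrative LMC-like inputs: total mass with halo, local halo
# density, orbital speed, halo velocity dispersion, Coulomb logarithm.
M, rho, v, sigma, lnLam = 1.5e11, 1.0e5, 300.0, 120.0, 5.0
a = chandrasekhar_decel(M, rho, v, sigma, lnLam)
t_gyr = (v / a) * 0.9778  # crude decay timescale v/|dv/dt|; 1 kpc/(km/s) = 0.9778 Gyr
print(f"deceleration ~ {a:.0f} (km/s)^2/kpc, decay timescale ~ {t_gyr:.1f} Gyr")
```

With these assumed numbers the orbital-decay timescale comes out at a couple of Gyr, illustrating why a massive dark matter halo would drag satellites like the LMC inwards within cosmologically short times, which is what the test checks against the observed systems.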

But why did the scientific establishment largely ignore the above 2015 paper? Why has this quite simple test, which can be repeated by any student of astronomy, not been taken seriously? I guess it was simply not allowed to be taken seriously. After all, Nobel Prizes had to be handed out, and millions of dollars and euros are at stake for certain people. Is this one of the reasons why attempts to do significant research on alternatives, such as MOND (Scholarpedia article; Wikipedia article; see also Merritt’s award-winning book on MOND), are discouraged and even discredited, especially in the USA? Can one see all of this as scientific misconduct on a vast scale? On Triton Station we read: “Many people in the field hate MOND, often with an irrational intensity that has the texture of religion. It’s not as if I woke up one morning and decided to like MOND – sometimes I wish I had never heard of it – but disliking a theory doesn’t make it wrong, and ignoring it doesn’t make it go away. MOND and only MOND predicted the observed RAR a priori. So far, MOND and only MOND provides a satisfactory explanation thereof. We might not like it, but there it is in the data. We’re not going to progress until we get over our fear of MOND and cope with it. Imagining that it will somehow fall out of simulations with just the right baryonic feedback prescription is a form of magical thinking, not science.” (Stacy McGaugh, 12.08.2024).

I refer to the Chandrasekhar dynamical friction test as the “Winnie-the-Pooh test for dark matter” (DMC90 and DMC91), based on a story in which Winnie the Pooh uncovers that a famous huge jar of hunny, supposedly filled with precious invisible hunny, is a scam: the jar is empty. As a consequence, Winnie the Pooh is expelled from the village, because the scam, if made public, would destroy the economic model the village lives off, namely the tourists who come and spend money to see “The Jar Full of Invisible Hunny“. In this same sense, with dark matter not existing, much of the current physics economy would collapse, as it is based on an endless repetition of grant proposals to use much taxpayers’ money to keep searching for it and to keep attempting to solve all the problems it generates in computer models of galaxies. Entire scientific careers, in fact entire research institutions and major professorships, are built on this economy, with prizes and awards. Given that dark matter does not exist (by the negative outcome of the Winnie-the-Pooh test), the scientific community needs to question how all of this activity is consistent with the task scientists have been charged with by the taxpayer, namely to advance our understanding of how nature works. This crisis is accentuated (i) by the standard model of particle physics not implying any not-yet-discovered particles, and (ii) by all known theoretical extensions of this model that were hoped to account for dark matter particles having been largely excluded by collider experiments, as just underlined by “Dark Matter Escaping Direct Detection Runs into Higgs Mass Hierarchy Problem”, where the authors re-stress (iii) the lack of detections of anything that looks like a dark matter particle.

Note the consistency of it all: the Winnie-the-Pooh test for dark matter conclusively tells us there is no dark matter. Direct and indirect searches have likewise found no evidence for the existence of dark matter particles despite 40 years of searching. No theoretical particle-physics model of dark matter particles survives the experimental tests.

Title of the seminar:
The dark-matter-free universe: the application of Chandrasekhar dynamical friction to test for dark matter particles and the consequences for fundamental physics thereof.

Abstract:

The current standard model of cosmology (SMoC) is based on the extrapolation, by many orders of magnitude, of Newtonian/Einsteinian gravitation from its realm of empirical extraction prior to 1916 (the Solar System) to galaxies and the whole Universe. In view of much more recent observational data (post-1970s) on galaxies, on structure formation and on the radiation background, this extrapolation requires the extension of the standard model of particle physics (SMoPP) to accommodate new exotic dark matter particles (DMPs) that interact gravitationally but only negligibly with standard matter via the three fundamental forces. The SMoC further requires the introduction of inflation and dark energy to account for crucial observations. Although very widely accepted and believed to the point of near-totalitarianism, none of these three auxiliary hypotheses has a well-founded physical understanding or independent experimental verification apart from the data used for their construction. Thus, the many world-wide searches over the past 40 years for the putative DMPs have cost much but yielded nothing. This does not disprove the existence of DMPs. However, nearly every new major observation has contradicted SMoC predictions (e.g. JWST high-redshift observations, the Hubble Tension), the SMoC in actuality being an impressive theory of physics in terms of a well-documented track record of complete failures while still being upheld as a great success story.

With this presentation I will explain a simple test for the existence of DMPs, namely the "Winnie-the-Pooh Test for dark matter". This is based on the Chandrasekhar dynamical friction process, taught at undergraduate (I would even write Kindergarten) level. I will explain this process and apply it to the triple galaxy system composed of the Small Magellanic Cloud (SMC), the Large Magellanic Cloud (LMC) and the Milky Way (MW). The modern exquisite motion, position and star-formation-history data completely (i.e. with more than five sigma confidence) falsify the existence of any sort of DMPs. In the presence of DMPs, this triple system would have had the SMC and LMC merge Gyrs ago. Tests based on Chandrasekhar dynamical friction applied to galactic bars that stir up the halos of DMPs (like spoons in a coffee), to the nearby galaxy group M81 and to MW satellite galaxies independently confirm this result. With DMPs being excluded with complete and utter certainty (explaining the above search-null-results), the entire SMoC breaks down, because Einsteinian gravitation alone plus inflation plus dark energy does not account for structure growth and galaxies. Instead, the observed dynamics of open star clusters, of galaxies and groups of galaxies, as well as of clusters of galaxies, behave as if the gravitational potential were made by DMPs, but without the DMPs. This is of course MOND, about which Prof. Dr. Stacy McGaugh from Case Western Reserve University (USA) writes: "We might not like it, but there it is in the data. We're not going to progress until we get over our fear of MOND and cope with it."

Time permitting, I may touch on the more conservative (in the sense of "holding traditional values") approach to cosmology, which rests, in a first step, on not changing the SMoPP significantly or at all beyond its current formulation (in view of no evidence whatsoever for the existence of additional non-standard particles), avoiding the many-orders-of-magnitude gravitational extrapolation but allowing gravitation to be generalised to incorporate beyond-Solar-System constraints à la MOND. This approach is currently actively being studied in Bonn (by Nils Wittenburg, Ingo Thies, Jan Pflamm-Altenburg and others), in Prague (by Nikolaos Samaras) and in Nanjing (by Eda Gjergo), and is composed of different branches. One such branch, the "Bohemian Model of Cosmology" (BMoC), is ultra-conservative and yet post-modern, naturally avoiding inflation, dark energy and DMPs, and forming galaxies well beyond a redshift of 10, with the Hubble Tension being non-existent.

Slides:


In The Dark Matter Crisis by Elena Asencio and Pavel Kroupa. A listing of contents of all contributions is available here.

We had a recent case where a submitted comment to The Dark Matter Crisis did not appear in the system, the comment being swallowed. The user had to use a different browser to submit the comment which we then approved. In case you submit a comment and it does not appear, try another browser.

Preface by P. Kroupa: The Hubble Tension appears to be making astronomers and cosmologists go quite crazy. Many possible solutions involving exotic forms of time-dependent dark energy, which is itself not at all understood physically, are being proposed, with complicated calculations underscoring the mathematical sophistication. The mundane fact that our Local Group lies in a very large under-dense region, well known as the "KBC void", is near-completely ignored, even though such a void is well established by observational data. It automatically leads to the galaxies within it falling towards its edges, which are a few hundred Mpc away. We, the observers, thus see an apparently faster-expanding local Universe, since the galaxies are receding from us somewhat faster than they would through the Hubble flow alone, and this is what the Hubble Tension is all about. The bulk flow of the galaxies on the 200 Mpc scale has recently been confirmed. With his second research paper, Bonn University undergraduate physics student Sergij Mazurenko and collaborators explain the observed decrease of the inferred Hubble parameter with increasing redshift.

With this guest-post, Portsmouth-astrophysicist Indranil Banik describes the contents of the research paper “The redshift dependence of the inferred H0 in a local void solution to the Hubble tension” by Mazurenko et al. (2025, MNRAS, 536, 3232).

Cosmology is currently in a crisis known as the Hubble tension, a statistically significant discrepancy between the apparent local expansion rate of the Universe and its predicted expansion rate in the standard cosmological paradigm based on extrapolating observations in the infant Universe. The expansion rate is summarised by the Hubble parameter H, which is the rate at which the logarithm of the distance between any two objects increases with time if the objects are unaffected by any other forces beyond cosmic expansion. The present expansion rate is denoted H0, with the 0 subscript indicating the present value. We can get H0 in two main ways, which we discuss below.
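In symbols, the definition just given reads as follows (with a(t) the cosmic scale factor, so that the distance D(t) between two comoving objects scales in proportion to a(t)):

```latex
H(t) \;\equiv\; \frac{\mathrm{d}\ln D(t)}{\mathrm{d}t} \;=\; \frac{\dot{a}(t)}{a(t)},
\qquad H_0 \;\equiv\; H(t_{\mathrm{today}}).
```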

The early route to H0

The prediction of H0 in the Lambda-Cold Dark Matter (ΛCDM) standard cosmological paradigm is based on fitting the pattern of anisotropies in the cosmic microwave background (CMB), a nearly uniform and almost perfectly blackbody radiation thought to come from redshift z = 1100, when neutral atoms first formed and allowed light to propagate freely. At earlier times, electrons and protons could not bind together into atoms as the Universe was too hot. Free electrons readily absorb and scatter photons, making the early Universe opaque. At recombination about 380 kyr after the Big Bang, the Universe cooled sufficiently for neutral atoms to form, allowing light to travel almost unimpeded across cosmological distances.

The CMB has tiny temperature fluctuations at the 0.001% level. These fluctuations are more pronounced on some angular scales, with the temperature variations being maximal on degree scales. The fluctuations are thought to have been seeded by quantum processes at very early times. The key point is that these processes imprinted sound waves in the plasma, which then propagated until recombination. Similarly to how we can guess the size of a musical instrument just by listening to it, we can also learn about the primordial Universe by observing the CMB and seeing which angular scales are noisier. In the case of a musical instrument like a guitar string, the sound it makes is mostly at multiples of a certain frequency known as the fundamental frequency. This is because the length of the string must correspond to a whole number of half-wavelengths, as otherwise there would be destructive interference, causing the string to make little noise at the corresponding frequencies. If we decompose the sound made by the string into different frequency components, it would not be too hard to learn the length of the string.
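As an aside, the string analogy can be made concrete with a toy calculation; the wave speed and fundamental frequency below are made-up illustrative numbers, not measurements:

```python
# Toy version of "hearing the size of an instrument": for an ideal string
# fixed at both ends, the allowed frequencies are f_n = n * v / (2 * L),
# so measuring the fundamental alone fixes the length L once the wave
# speed v on the string is known.
v_wave = 400.0   # wave speed on the string in m/s (illustrative)
f1 = 330.0       # measured fundamental frequency in Hz (illustrative)

L = v_wave / (2.0 * f1)              # inferred string length in metres
harmonics = [n * f1 for n in (1, 2, 3)]  # the overtone series
print(f"L = {L:.3f} m, first harmonics: {harmonics} Hz")
```

The CMB analysis works in the same spirit: the angular scales of the acoustic peaks play the role of the harmonic frequencies, and the inferred "size" is the sound horizon at recombination.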

Similarly, the temperature fluctuations in the CMB plotted as a function of angular scale (its angular power spectrum) can teach us a lot about the early Universe. The CMB power spectrum is usually plotted going from larger to smaller angular scales. The first peak corresponds to the fundamental frequency, with subsequent peaks being like harmonics. The acoustic oscillations in the CMB power spectrum arise because tiny initial density fluctuations grow at first, but then radiation pressure causes the fluctuations to start oscillating, so eventually an initially overdense region becomes maximally overdense, then underdense, then overdense again at later times, and so on. If we think of fluctuations on different length scales, the time needed for the oscillations to take place is longer for larger wavelength perturbations. The first peak in the CMB corresponds to an initial overdensity growing to a maximum in the fixed time between the Big Bang and recombination, while the second peak corresponds to an initial overdensity reaching maximum underdensity, and so on.

By modelling the CMB power spectrum in ΛCDM, it is possible to deduce its cosmological parameters, one of which is H0. Radiation rapidly becomes unimportant and is already sub-dominant by the time of recombination, so the distance to the surface of last scattering and the age of the Universe are mainly governed by H0 and the matter density parameter ΩM, which tells us the fraction of the cosmic critical density in the matter component. The rest is presumed to be dark energy.

In this way, observations of the infant Universe can tell us H0 in the context of ΛCDM. This value of H0 must be thought of as a prediction of a model calibrated to fit the CMB anisotropies, which are thus obviously not evidence in favour of the model. We could obtain evidence in its favour if we obtain H0 in some independent way and obtain agreement with the early Universe value, which is usually called the Planck value of H0 because the most precise CMB observations come from the Planck satellite launched by the European Space Agency (ESA).

The local estimate of H0 and the Hubble tension

In the local Universe, we can observe the distances and redshifts of galaxies. As discussed in our recent paper (Mazurenko, Banik & Kroupa 2025), this can yield an estimate of H0 under the assumption that redshift arises purely from cosmic expansion over the light travel time of a photon, whose wavelength grows in direct proportion to the size or scale factor of the Universe. Galaxies and supernovae (SNe) observed at larger distances are observed further back in time, so there has been more cosmic expansion while light from them was travelling to our detectors. We would expect a linear relation between redshift and distance, which explains the empirical Hubble law. Its only free parameter is the local redshift gradient, which directly tells us H0. Excitingly, the locally measured value is about 8.3% larger than the Planck value of H0. The margin of error is only 1.7%, arising mostly from the local redshift gradient.
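As a toy illustration of the method (not the actual SH0ES analysis pipeline), H0 can be recovered as the slope of a mock Hubble diagram; the input value, noise level and distance range below are assumptions chosen purely for the sketch:

```python
import random

# Illustrative only: recover H0 from a mock Hubble diagram. The
# nearby-Universe Hubble law cz ~ H0 * d means H0 is the slope of
# recession velocity against distance.
random.seed(42)

H0_TRUE = 73.0  # km/s/Mpc, hypothetical input value
data = []
for _ in range(200):
    d = random.uniform(100.0, 600.0)             # distance in Mpc
    v = H0_TRUE * d + random.gauss(0.0, 300.0)   # Hubble flow + mock peculiar velocity
    data.append((d, v))

# Least-squares slope of a line through the origin: H0 = sum(v*d) / sum(d*d)
H0_fit = sum(v * d for d, v in data) / sum(d * d for d, _ in data)
print(f"recovered H0 = {H0_fit:.1f} km/s/Mpc")
```

With 200 mock galaxies, the peculiar-velocity scatter of 300 km/s degrades the recovered slope by well under 1%, which is why the real local measurement can reach a 1.7% margin of error.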

The redshifts of galaxies and SNe are obtained spectroscopically and cannot contain observational systematics at this level, so attention has focused on distance measurements. While these have historically been challenging, many studies now converge on a consistent extragalactic distance scale. For instance, the Hubble and James Webb Space Telescopes yield almost exactly the same distances for galaxies analysed by both (Riess et al. 2024). There are also a wide variety of distance indicators beyond the traditional Type Ia SNe and the Leavitt law of Cepheid variables, itself much better calibrated thanks to trigonometric parallaxes obtained by the ESA Gaia mission. We can also obtain distances using Type II SNe, the tip of the red giant branch (TRGB), megamasers, surface brightness fluctuations (SBF), the Tully-Fisher (TF) and Fundamental Plane (FP) galaxy scaling relations, and several other techniques (Uddin et al. 2024). The local redshift gradient is consistent between these techniques, with the Cepheid-SNe route being the most precise.

Reconciling the H0 estimates using a local void

Redshifts do not arise only from cosmic expansion. In nearby galaxies, the main source of redshift is velocity along the line of sight, via the familiar Doppler effect. There is also gravitational redshift (GR), which is apparent in observations from near the black hole at the centre of our Galaxy, or more subtly from the different clock tick rates at the top and bottom of a building.

Bearing this in mind, it is clear that the local redshift gradient does not arise purely from cosmic expansion: structures in the Universe lead to additional redshift or blueshift. To minimise the impact of peculiar velocities, observers typically measure the redshift gradient over distances of about 100–600 Mpc, with the upper limit chosen because the assumption of a constant expansion rate breaks down if we go too far back in time. Estimates of H0 using the local redshift gradient over this distance range are subject to scatter or cosmic variance due to local structures. In ΛCDM, this cosmic variance is only about 1.3% (Camarena & Marra 2018).

To inflate the local redshift gradient, we would need to be in a local underdensity or void. We can see this in two ways, bearing in mind that the Universe was quite homogeneous at early times. Firstly, to get outflow away from us, we would need the gravitational field to point outwards, which implies the density must be larger at larger distances. Secondly, peculiar velocities that are directed outwards from some region would cause its density to decrease as matter leaves it, making it less dense than average.

Observations do in fact show that we live within the KBC void (Keenan, Barger & Cowie 2013). It has an apparent underdensity of 46% out to 300 Mpc, which is incompatible with ΛCDM expectations at 6σ confidence (Haslbauer, Banik & Kroupa 2020). The KBC void implies that structure formation must have been faster than predicted by ΛCDM. A simple estimate shows that outflows from the KBC void would inflate the local redshift gradient by about 11%, depending on exactly how the void has grown over time. In Haslbauer et al. (2020), we set up a detailed model for the formation of the KBC void from a slight initial underdensity on a similar comoving scale. We were able to obtain a good match to the observations. Later, we found that the velocity field predicted by the model in the local Universe agrees quite well with observations in terms of the bulk flow curve (Mazurenko et al. 2024). This important success is discussed in DMC86.

The redshift dependence of the Hubble tension

If the Hubble tension is caused by a local void, we would expect the problem to decay with redshift because observations beyond the void would not be affected by it. In our most recent paper, we performed the first test of this prediction using the concept of H0(z), the value of H0 inferred by observers using data in a narrow redshift range centred on z. The observational studies we used were Jia et al. 2023 and 2025, which in turn use a combination of SNe, baryon acoustic oscillation (BAO) and cosmic chronometer (CC) data. The authors are particularly careful to minimise correlations between the value of H0 inferred from data in different redshift bins. We used three methods to mimic the procedure used by observers to obtain H0(z), though the third method is most comparable to how observers actually analyse their data.

Figure 1: Predicted H0(z) curves in our void models (solid lines showing different void profiles) and in observations (points with error bars). The reference H0 values shown using horizontal bands are from the local redshift gradient (cyan) and the ages of the oldest Galactic stars (pink) since their ages tell us that the Universe must be sufficiently old to accommodate them and this sets a cosmology-model-independent limit on how rapidly the Universe could have expanded. The Planck value is shown using the horizontal olive line, with uncertainty indicated towards the left. For illustrative purposes, the dashed red line shows the Gaussian void profile without the GR contribution. Adapted from figure 3 of Mazurenko, Banik & Kroupa (2025).

Our results are shown in Figure 1, where the points with uncertainties show observational results in different redshift bins, the widths of which are shown using the horizontal errors. The solid curves correspond to the predictions of our model, with different colours showing results for different void underdensity profiles. For comparison, the dashed red curve shows results for the Gaussian profile without the GR contribution, allowing it to be quantified by comparison with the solid red curve. We also show several reference values of H0 using horizontal bands. The local redshift gradient using the Cepheid-SNe route is shown in cyan, while the Planck value is shown as a horizontal olive line with uncertainty indicated towards the left. The thick pink band shows H0 from the age of the Universe as quantified from the oldest stars in the Galactic disc and halo, allowing 0.2 Gyr for the first stars to form (Cimatti & Moresco 2023). Varying this by a factor of 2 leads to a slightly wider allowed range of H0, as shown by the lightly shaded pink region.

Figure 1 shows that the observational results of Jia et al. 2023 and 2025 straddle the model predictions. Since the model parameters are the same as in Haslbauer et al. 2020, this success is the confirmation of an a priori prediction. The Hubble tension does not persist in high-redshift datasets. It must therefore be solved at late times in cosmic history, as is indeed the case if it is a local issue.

A descending H0(z) curve is not expected if the Hubble tension is solved prior to recombination. Such early-time solutions introduce additional physics such that the CMB can be fit despite using the high local H0, but then we would expect to infer this value from data at any redshift below 1000. Another strong argument against early-time solutions is that they predict the Universe to be about 8% younger than in the Planck cosmology. This is not in line with the ages of the oldest Galactic stars, as shown by the pink bands in Figure 1. These actually show that stellar ages in combination with uncalibrated BAO data are in good accord with the Planck H0. While this result is based on absolute stellar ages, similar results can be obtained using instead differential ages, as done in the CC technique (Cogato et al. 2024; Guo et al. 2024).

The decay of the Hubble tension with redshift is a strong prediction of any local solution to the Hubble tension. The observational confirmation of this prediction must therefore be considered strong evidence in favour of a local solution as is given by the KBC void. Moreover, the detailed form of the decay agrees quite closely with the predictions of our local void model, adding to the growing body of evidence in its favour.



Preface by P. Kroupa: When I was a PhD student at the IoA in Cambridge, I remember Donald Lynden-Bell often talking about how important it would be to measure the motion of the Large and Small Magellanic Clouds. At that time this was well beyond me and I continued merely counting my low-mass stars. But only some 4 years later, when I was a postdoc in Heidelberg, we actually did provide the very first measurement in 1994 (“On the motion of the Magellanic Clouds“), and three years later another much improved one (“The Hipparcos proper motion of the Magellanic Clouds“). Today, thanks to the Gaia astrometric mission, other astronomers have even measured the internal rotation of the Clouds (van der Marel & Sahlmann 2016). That is stunning. But at a more fundamental level, that the Small Magellanic Cloud (SMC, about 10% of the baryonic mass of the Large Magellanic Cloud; LMC) has been orbiting the LMC (about 10% of the baryonic mass of the Milky Way; MW) at a current distance of about 20 kpc multiple times, while both are falling, or racing, past our MW at a distance of about 50-60 kpc, allows us to directly probe for the existence of dark matter halos, as are predicted by the standard model of cosmology. All three galaxies should be deeply immersed in their mutual dark matter halos. This gives a lot of dynamical friction if dark matter particles exist. We published a detailed study of this in the journal Universe just this year (The Relevance of Dynamical Friction for the MW/LMC/SMC Triple System), with the result that dark matter particles cannot be there, definitely not, as otherwise the SMC would have merged into the LMC a long time ago. It turns out that the dynamical friction force is comparable to the gravitational attraction force between the LMC and the SMC, causing very rapid orbital decay. With this result, it is then not surprising that actual simulations of galaxy formation in the ΛCDM model indeed do not produce such triple systems. 
The Magellanic Clouds are thus impossible in the ΛCDM theory of cosmology. This result is reported in this very deep top-end study published as part of the PhD research work by Moritz Haslbauer in the journal Universe (Haslbauer, Banik et al. 2024, The Magellanic Clouds are very rare in the IllustrisTNG simulations).

_The present DMC97 blog by Elena Asencio and Indranil Banik describes this study:_

When we look at the night sky, we mostly see stars in the Milky Way (MW) galaxy. Only a few external galaxies are visible to the naked eye. The nearest ones are the Large and Small Magellanic Clouds (the LMC and SMC, respectively), which are visible from the southern hemisphere. The Magellanic Clouds (MCs) are actually considered satellites of the MW, and are almost certainly the two most massive ones. Observations have also revealed a stream of gas known as the Magellanic Stream (MS) which circumnavigates the Galaxy. This is thought to have come from a past interaction between the MCs as they fell in towards the MW. All this makes the MW-LMC-SMC triple system really interesting. And its proximity means we have quite precise data on important quantities like positions and velocities in 3D.

This led us to explore the MCs in more detail. We realised that the high masses of all three objects mean that in the standard Lambda-Cold Dark Matter (ΛCDM) model of cosmology, they should experience significant Chandrasekhar dynamical friction from their overlapping DM halos, which are expected to be way more massive than the already substantial stellar and gas masses of these galaxies. All this dynamical friction means that the MCs should quickly merge with each other and with the MW. That inevitably leads to the first-infall scenario, the idea that we are just now living at the time when the MW, LMC, and SMC just happen to have fallen into each other in a chance three-body interaction. We wondered how likely this is in ΛCDM cosmological simulations, or, in other words, how likely it is that the MC analogues in these simulations are as close to each other and to the MW as the observed system, both in position and in velocity.

Through our results, we identified two major ways in which the MCs are unlikely in ΛCDM, which we discuss below.

Galactocentric distance and mass of the LMC

A particularly simple way to understand the problem is to forget about the SMC and just focus on the MW and LMC. To select LMC analogues in the cosmological ΛCDM simulations, we chose the most massive satellite in each of the simulated MW-like systems, and we required them to have as much stellar mass as the observed LMC. This left us with 5568 systems. We then plotted the mass distribution of the selected LMC analogues vs their Galactocentric distance (see Fig. 1). This showed that there are no systems that simultaneously have the large total mass and the small Galactocentric distance of the LMC. These extreme properties (with respect to ΛCDM expectations) of the LMC correspond to a tension of > 3.75σ with the ΛCDM model.

Fig. 1: This figure shows the distribution of the relative MW-LMC distance and the total LMC mass of analogues in the combined sample of TNG cosmological simulation runs (5568 objects). The red dot with vertical error bar refers to the observed relative distance d_MW-LMC = 50.0 kpc and the total LMC mass deduced from five Galactic stellar streams near the LMC (these give M_LMC = 18.8 (+3.5/−4.0) × 10^10 M_⊙; Shipp et al. 2021). None of the 5568 simulated objects has a lower Galactocentric distance and higher mass than the LMC. Credit: figure 7 in Haslbauer et al. 2024.

Phase-space density of the MCs

The other unlikely aspect of the MCs is their proximity in phase space, i.e., the LMC and the SMC are very close to each other and have a low relative velocity. In the first infall scenario — the most likely formation process of the MCs within the ΛCDM framework — the MCs would have fallen into the MW from large distances. This typically yields high relative velocities between the MCs and makes it unlikely that they are very close to each other.

To quantify the likelihood of finding the observed velocity-distance configuration of the MCs in ΛCDM cosmological simulations, we defined the MCs' phase-space density as f_MC ≡ 1/(d × v)^3, where d is their relative separation and v is their relative velocity. We then obtained this value for the LMC-SMC analogues in the simulation, and compared it with the observed one. Our results show that the observed value is far higher than the values obtained for the systems in the simulation (see Fig. 2). By extrapolating the simulated f_MC distribution up to the observed value, we inferred that a phase-space density like the one observed for the LMC-SMC system would only occur about once in every 13000 analogue systems. This entails a tension between the LMC-SMC system and the ΛCDM model of 3.95σ.
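The definition above is simple to evaluate. As a rough sketch, assuming illustrative approximate values for the LMC-SMC separation and relative velocity (not the exact figures used in the paper):

```python
# Phase-space density of the LMC-SMC pair, f_MC = 1/(d*v)^3, as defined
# in the text. The separation and relative velocity below are rough
# illustrative values only.
d_kpc = 24.5   # LMC-SMC separation in kpc (approximate)
v_kms = 90.0   # LMC-SMC relative velocity in km/s (approximate)

f_mc = 1.0 / (d_kpc * v_kms) ** 3   # units: kpc^-3 km^-3 s^3
print(f"f_MC ~ {f_mc:.2e} kpc^-3 km^-3 s^3")
```

With these rough inputs the result already lands near the ~10^−10 kpc^−3 km^−3 s^3 scale of the observed value quoted in Fig. 2: a close pair with a low relative velocity has a high f_MC, which is exactly what makes the observed configuration so rare among the simulated analogues.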

Fig. 2: This figure shows the cumulative distribution of f_MC values obtained from the TNG50-1 run, fitted with a linear (dashed black line) and a quadratic (solid black line) function in log10-space. The red dashed line marks the observed value of f_MC,obs = 9.10 × 10^−11 km^−3 s^3 kpc^−3. The intersection between the linear (quadratic) fit and the red dashed line yields a p-value of 1.62 × 10^−3 (7.81 × 10^−5), which corresponds to a 3.15σ (3.95σ) tension. Since our analysis of the residuals for each of these fits shows that the quadratic one provides a better fit to the data, we chose this as our nominal result. Combining this 3.95σ tension with the tension described in the previous section gives a joint tension of > 5.1σ. Credit: figure 2 in Haslbauer et al. 2024.
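For readers who want to reproduce the conversion between the p-values and the Gaussian sigma values quoted in the caption, a minimal sketch using only the Python standard library (assuming the usual two-tailed convention):

```python
from statistics import NormalDist

def p_to_sigma(p: float) -> float:
    """Convert a two-tailed p-value into the equivalent Gaussian sigma,
    i.e. the z for which the probability of |Z| > z equals p."""
    return NormalDist().inv_cdf(1.0 - p / 2.0)

# The two p-values quoted in the caption (linear and quadratic fits):
print(f"{p_to_sigma(1.62e-3):.2f} sigma")  # close to the quoted 3.15
print(f"{p_to_sigma(7.81e-5):.2f} sigma")  # close to the quoted 3.95
```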

Relation to the MW Disc of Satellites (DoS)

The MCs are also part of a structure known as the Disc of Satellites (or DoS, for short). The DoS is a flattened and kinematically coherent plane of satellite galaxies perpendicular to the Galactic disk. These properties make it very hard to reconcile the DoS with ΛCDM expectations given that, in this paradigm, most substructures are accreted through DM filaments, which are too wide to match the flatness and the kinematic coherence of the DoS. In fact, Pawlowski & Kroupa 2020 found that the likelihood of having a structure like the DoS in a ΛCDM universe is 0.04%, confirming the previous assessment by Kroupa, Theis & Boily 2005.

In our study (Haslbauer, Banik et al. 2024), we assessed the probability that the LMC brought in the satellites of the DoS together with itself as it fell into the MW potential. For this to be a plausible scenario, LMC-like galaxies should have at least 10 satellites within a radius of 20 kpc. In the ΛCDM cosmological simulations, we found that this kind of system usually has only one satellite galaxy within 25 kpc, which makes this scenario very unlikely as a possible explanation for the DoS.

This implies that, if the LMC did not bring in the other satellites in the DoS, then the MCs should have an independent origin from the DoS. By quantifying how likely it is for the LMC to fall into a pre-existing DoS, we found that there is a 3% chance of this happening. This corresponds to a 2.17σ tension with ΛCDM.

Combining this additional tension with the independent problems described in the previous sections leads to a total tension with the ΛCDM model of > 5.2σ.

We note that, in alternative models that do not rely on DM to describe the dynamics of galaxies (e.g. MOND), the formation of the DoS can be naturally explained as the result of a past MW-M31 interaction which led to the formation of tidal tails in the MW, and thus to the generation of flat and kinematically coherent substructures. Such a scenario does not work in ΛCDM because, among other reasons, the dynamical friction between the DM halos of the MW and M31 should have been strong enough to make them merge shortly after the interaction.

As we describe in the following section, the dynamical friction of the galactic DM halos is also likely to be the reason why ΛCDM has such difficulties reproducing the observed properties of the MCs.

The role of dynamical friction

Our main conclusion is that there is just way too much dynamical friction on the MCs in ΛCDM, and this is why they are so unlikely to look like they actually do. With less dynamical friction, the MCs could have been orbiting each other and their barycentre could have been orbiting the MW for quite a long time. It would still be somewhat unlikely to observe the MCs at perigalacticon, but not unduly so. This is because there would be multiple perigalacticon passages past the MW, so we could be observing the MCs just after any one of these close Galactic approaches. Similarly, the MCs could have been orbiting each other for quite a long time, in which case we could observe them close together. The relative velocity between the MCs is actually quite nicely consistent with them being on a bound orbit around each other, which seems odd if they fell in from large distances and only closely approached each other recently. Moreover, there must have been past close approaches between the MCs to form the MS (Lucchini 2024). We argue that an encounter within 30 kpc is required 1–4 Gyr ago, so the MCs must actually have had past close approaches, beyond their most recent one in the past few hundred Myr (the MCs interacted closely about 150 Myr ago, but this is way too recent to explain the MS). That would have slowed down the MCs relative to each other, which one might think explains why they have such a low relative velocity today. But this argument is not quite right because the dynamical friction is so strong that the galaxies would already have merged by now. Once they have a close encounter that gets them within 30 kpc of each other, they do not remain separate objects for a billion years.
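The strength of the effect can be illustrated with the standard Chandrasekhar dynamical friction formula for a body moving through a Maxwellian background of particles. The mass, halo density, velocity, dispersion and Coulomb logarithm below are illustrative guesses for an SMC-like satellite, not the values used in the paper:

```python
import math

def chandrasekhar_deceleration(M, rho, v, sigma, ln_lambda):
    """Chandrasekhar dynamical-friction deceleration (SI units) of a body
    of mass M moving at speed v through a background of mass density rho
    with an isotropic Maxwellian velocity dispersion sigma; ln_lambda is
    the Coulomb logarithm."""
    G = 6.674e-11  # m^3 kg^-1 s^-2
    X = v / (math.sqrt(2.0) * sigma)
    # Fraction of background particles moving slower than the body:
    bracket = math.erf(X) - 2.0 * X / math.sqrt(math.pi) * math.exp(-X * X)
    return 4.0 * math.pi * G**2 * M * rho * ln_lambda * bracket / v**2

# Illustrative numbers only: an SMC-like galaxy of total mass ~1e10 Msun
# moving at 100 km/s through a DM halo of density ~1e-22 kg/m^3 with
# sigma = 100 km/s and ln(Lambda) = 3.
M_SUN = 1.989e30
a_df = chandrasekhar_deceleration(1e10 * M_SUN, 1e-22, 1e5, 1e5, 3.0)
print(f"deceleration ~ {a_df:.1e} m/s^2")
```

Even for these rough inputs, the deceleration integrates to a velocity change of order 100 km/s per Gyr, comparable to the orbital velocity itself, which is why orbital decay and merging follow within roughly a Gyr once massive DM halos overlap.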

To check this, Oehm & Kroupa 2024 conducted backward trajectory integrations, including the dynamical friction term, which causes the orbital energy to increase as you integrate backwards in time. These integrations show that the dynamical friction is so strong that there would not be a past interaction between the MCs of a sort that could form the MS. So whether you look at the problem forward in time as we do or backward in time as they did, the conclusion is similar: the observed MW-MCs system cannot form in ΛCDM.

Other astronomers are welcome to disagree with our strong conclusions, but they should then demonstrate an analogue to the MW which has satellites analogous to the MCs in terms of stellar mass, and show that these satellites attain the high phase-space density of the observed MCs. Or even just show that an analogue to the MW can have a satellite as massive as the LMC at its low observed Galactocentric distance. These outcomes have to be reasonably likely, not just happen once in a million analogues to the MW, or be engineered as part of the initial conditions. This is a challenge that the ΛCDM community will not be able to rise to. Dynamical friction will continue to keep those researchers in the dark as to how the MCs actually got to where they are today.


In The Dark Matter Crisis by Elena Asencio, Moritz Haslbauer and Pavel Kroupa. A listing of contents of all contributions is available here.


Preface by P. Kroupa: Cosmological science is undergoing a phase transition, or in other words a paradigm shift, and regular discussions between natural philosophers are important. Anastasiia Lazutkina proposes a regular forum for this purpose:

I’m pleased to announce the launch of a new seminar series, “The Gravitation of Stellar Systems: From Stars to the CMB,” which will explore cutting-edge research on stellar systems, galaxies, and cosmology. These seminars will address key questions surrounding the nature of dark matter, the dark matter problem, and alternatives to Newtonian and Einsteinian gravitation, such as MOND.

This series will feature presentations by authors of recently published papers, providing a platform for in-depth discussions on the latest findings and theories related to cosmological models.

First Seminar: October 23rd, 2024

Our first seminar will feature Tobias Mistele, who will present his recent paper (Mistele et al., 2024) on gravitation in stellar systems. This will be followed by an open discussion.

If you’re interested in joining the seminar or have any questions, feel free to contact me for more details: a.k.lazutkina@gmail.com



Guest blog by Xavier Hernandez.

Introduction by P. Kroupa: Prof. Xavier Hernandez originally suggested in 2012 to use the statistical velocity differences between the two component stars of very wide binary systems to test Milgromian versus Newtonian dynamics: if gravitation is Milgromian, it is stronger in the very weak-field limit than Newtonian gravitation, so the velocity differences of stars orbiting each other at great separations should be larger than Newtonian gravitation predicts. This has now become a major field of research with partially contradictory results. Xavier explains these and puts the newest research on a firm footing.

Introduction:

The best scientific description of gravity is Einstein’s theory of General Relativity (GR). However, assuming its validity at all astronomical scales leads to gravitational anomalies. For example, rotation velocities of disk galaxies are inconsistent with the predictions of GR; the theory predicts orbital velocities which, as in our Solar System, decrease with distance. And yet, observations clearly show rotation curves which at large distances remain flat at constant values.

In order to force agreement between theory and observations, the ad hoc hypothesis of Dark Matter was introduced: the assumption that a transparent, dominant component of unknown particles forms huge halos surrounding galaxies and is responsible for their observed dynamics, despite the total absence of any direct evidence for the existence of these hypothetical particles. The internal distribution of dark matter within these hypothetical halos can then be freely adjusted to match the observed rotation curves. These gravitational anomalies are also well known at larger scales, in galaxy clusters and at the cosmological level. It is plausible that if each galaxy is surrounded by a dark matter halo, galaxy clusters and the Universe as a whole would harbour a similar fraction of this component, as is indeed inferred under GR. The internal inconsistencies of the Dark Matter hypothesis are more subtle than simple total quantity accountancy can reveal. Driven by the necessity to confirm the Dark Matter hypothesis, one of the largest endeavours of modern science over the last four decades has been the direct search for dark matter particles. In spite of the efforts of thousands of scientists around the world, not a single particle of dark matter has ever been detected, with plausible candidate particles being periodically discarded every few months.

It is crucial to appreciate that the gravitational anomalies attributed to Dark Matter always appear below a critical acceleration, which led Moti Milgrom in 1983 to propose that GR is no longer valid below this critical acceleration of a0 = 1.2 × 10⁻¹⁰ m/s². This idea is termed MOdified Newtonian Dynamics, MOND (or MilgrOmiaN Dynamics). One must bear in mind that the many observational verifications of GR always occur at acceleration scales over a million times larger than a0. Just as Newton’s theories fail as velocities approach the speed of light, it is proposed that GR has a validity limit at low accelerations. It must be appreciated that by construction, GR tends to Newton’s theories at low velocities; in turn, Newton’s theories were calibrated to fit the observed dynamics of the Solar System. All our physical theories have an empirical grounding, as well as validity limits once one explores regimes sufficiently distant from those over which the theory was calibrated. While still not a finished theory, MOND presents a very successful description of gravity at galactic scales without requiring the presence of any phantasmagorical undetected components.

The Wide Binary Star Test:

In order to attempt to distinguish between the two possibilities described above, I started to look for low acceleration systems which had nothing to do with galaxies. If GR and Dark Matter are correct, these other systems should not show any gravitational anomaly, while if MOND is correct, the same clear gravitational anomaly should be present in any low acceleration system studied. In 2012 I published a paper (Hernandez, Jimenez & Allen 2012) proposing to use wide binary stars for this test. Once two Solar-type stars are separated by more than a few thousand astronomical units (AU; 1 AU is the distance between the Earth and the Sun), the internal acceleration of the system will be in the regime where gravitational anomalies appear in galaxies. Crucially, any anomaly found in wide binaries cannot be attributed to dark matter; within a GR framework, the rotation curve of the Galaxy determines the distribution of dark matter, which is also constrained by the vertical motion of stars in the Milky Way disk: with too much dark matter locally, the vertical oscillations of stars in the Solar Neighbourhood would be boosted beyond consistency with observations. If Dark Matter exists, locally there cannot be more than 0.01 solar masses of it per cubic pc (1 pc ≈ 206,000 AU), and hence the amount within the orbit of any wide binary is negligible compared to the masses of the stars themselves.
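As a back-of-the-envelope check of the regime involved (a sketch only, not the analysis of the papers), one can compute the internal Newtonian acceleration of a Solar-type binary as a function of separation and compare it to Milgrom's a0:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass in kg
AU = 1.496e11      # astronomical unit in m
A0 = 1.2e-10       # Milgrom's critical acceleration, m/s^2

def internal_acceleration(separation_au, total_mass_msun=2.0):
    """Newtonian acceleration of one star towards the other, in m/s^2."""
    r = separation_au * AU
    return G * total_mass_msun * M_SUN / r**2

# the internal acceleration of a 2 Msun binary falls to ~a0 near 10,000 AU
for s in (1000, 3000, 10000):
    print(f"{s:>6} AU: a/a0 = {internal_acceleration(s) / A0:.1f}")
```

This is why separations of several thousand AU place wide binaries squarely in the acceleration regime where galactic anomalies appear.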

The problem is complicated by the fact that the orbits of the wide binaries in question last thousands of years, so just like with galaxies, we cannot actually trace full individual orbits. Contrary to the case for galaxies, we do not know the orientation on the plane of the sky of a particular wide binary or the ellipticity of its orbit. We can measure only instantaneous values of the projected separation between both stars and the relative velocity between them, S2D and V2D, respectively. Fortunately, there are literally thousands of these wide binaries. Both Newtonian physics and MOND make definitive predictions as to what one should observe for distributions of (S2D, V2D) values. If MOND is correct, a 20% velocity excess over Newtonian expectations should be apparent in the data when crossing over to separations above 3000 AU, for large distributions of wide binaries.

After a wait of over a decade, data of the required quality finally appeared in 2022 with the third data release of the GAIA satellite (Figure 1). The data confirm the decades-old predictions of MOND and show Newtonian gravity to be incompatible with observations in the low acceleration regime. Given the low velocity scales of these stars, of only about 1 km/s, GR coincides with Newtonian gravity, and hence is also refuted by the data at the low acceleration scales probed. This result eliminates any astrophysical grounding for the Dark Matter hypothesis, which can now be seen as a modern-day “Aether”. My results have appeared in various refereed publications in the Monthly Notices of the Royal Astronomical Society (MNRAS), most notably Hernandez (2023) and Hernandez et al. (2024).

Figure 1: The GAIA satellite.

Following a completely independent and complementary approach to mine, with distinct hypotheses and assumptions, from the sample selection strategies to the statistical tests performed, Kyu-Hyun Chae of Sejong University in Korea obtained exactly the same results. Kyu was careful, as I have also been, to include in the data a Newtonian region where both MOND and GR predict the same results, and always checked that the high acceleration region (tight binaries, i.e. binary stars with separations much smaller than 3000 AU) yields results in accordance with Newtonian predictions, which hence validates the full procedure. Kyu’s refereed papers have appeared in the Astrophysical Journal; a few examples are Chae (2023) and Chae (2024).

The Banik Controversy:

Recently, however, a couple of papers have appeared erroneously claiming that the GAIA wide binaries in question are consistent with Newtonian expectations and show no signal of MOND. The first was a paper by Banik et al. (2024). In this paper the authors look only at the low acceleration region, i.e., they do not include tight binaries to validate the many details of the procedure. This means that they have no firm checks to confirm the results of their study in a region where there is no uncertainty as to what the answer should be. Looking only at the low acceleration region of the wide binaries, they report a 19-sigma preference for a Newtonian model over a MOND one. This implies that the Newtonian model is overwhelmingly preferred over the MOND variant. In science, such strong confidence levels, above 6-8 sigma, are practically unheard of. Even a 5-sigma preference represents an absolutely evident dominance of the preferred model.

This yields the clue to finding the main error in the Banik et al. (2024) paper: looking at the velocity distributions of the two models being compared, it is obvious that both match the data similarly well. In one case, the 5-12 thousand AU range in figure 12 of Banik et al. (2024), it is clear that the MOND model is actually better than the Newtonian one. The undisputed preference for the Newtonian model which a 19-sigma result implies is nowhere to be seen.

This is actually due to the fact that the authors failed to account for the presence of observational noise in the data when comparing to the models being tested. When comparing alternative models to data which include noise, as is the case with the observational noise present in the GAIA data being used, it is entirely standard to either introduce a degree of blurring into the theoretical models before comparing to the data, or to repeat the procedure many times, allowing mock re-samplings of the data to shift the observations about in accordance with the level of noise present. However, the authors of Banik et al. (2024) did neither, and compared pure theoretical models to the noisy GAIA data only once. This not only produces the implausible 19-sigma preference when the two models are in fact quite comparable, but also biases the results towards the Newtonian model. The reason for this last fact is that the noise-free MOND model cannot reproduce a MOND reality to which noise has been added, since the noise shifts velocities to values inaccessible to the pure theoretical MOND model. The pure Newtonian model has no such problem, as it is inherently shifted towards lower velocities. These and other errors, which make the conclusions of the Banik et al. (2024) paper untenable, appear in detail in a formal rebuttal which Kyu and I recently published (Hernandez, Chae & Aguayo-Ortiz 2024).
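The effect can be illustrated with a toy one-dimensional Monte Carlo (a hypothetical sketch, not the actual wide-binary analysis): data drawn from a model and then perturbed by observational noise are fit far better by a noise-convolved version of that model than by the pure, noise-free one.

```python
import numpy as np

rng = np.random.default_rng(1)
MODEL_SIGMA, NOISE_SIGMA = 0.3, 0.3  # hypothetical intrinsic and observational scatter

true_v = rng.normal(1.0, MODEL_SIGMA, 5000)           # "true" velocities from the model
observed = true_v + rng.normal(0, NOISE_SIGMA, 5000)  # what noisy observations would yield

def gauss_loglike(x, mu, sigma):
    """Total Gaussian log-likelihood of the data x under N(mu, sigma)."""
    return float(np.sum(-0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))))

ll_pure = gauss_loglike(observed, 1.0, MODEL_SIGMA)  # comparing the noise-free model directly
ll_conv = gauss_loglike(observed, 1.0, np.hypot(MODEL_SIGMA, NOISE_SIGMA))  # noise convolved in
print(ll_conv - ll_pure)  # large and positive: ignoring noise heavily penalises the true model
```

Comparing the noise-free model directly to noisy data thus produces an artificially huge likelihood difference, of the kind that can masquerade as an enormous sigma preference.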

The Cookson Controversy:

More recently a paper authored by S. Cookson appeared in MNRAS, also claiming that the GAIA data show no indication of a non-Newtonian gravitational anomaly (Cookson 2024).

The author of this paper collaborated with me previously and was in fact responsible for the coding of the software used in my 2022 paper, not for implementing any of the physics or dealing with the astronomical context. Unfortunately, he failed to realise that the Newtonian line he uses for comparison against the GAIA data assumes a total binary mass of 2.0 solar masses. I calibrated this line from the dimensionless results of Jiang & Tremaine (2010) assuming this generic value for the mass of the binaries, to be used only as a rough reference before doing any detailed statistical analysis which might confirm or reject any deviation from Newtonian gravity. This means that before concluding anything about what the data are saying, the author should have re-scaled the Newtonian line downwards to account for the fact that the average total mass of the binaries being treated is about 1.4 solar masses, as shown in my 2023 paper using the Newtonian line in question. In terms of velocities, this translates to a small offset of sqrt(2.0/1.4) = 1.2: only a 20% reduction in the velocity amplitude is needed for the line shown. Crucially, this is the same velocity offset which MOND predicts and which Kyu and I have found in the data.
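The rescaling is a one-liner: since the Keplerian relative velocity scales as v ∝ sqrt(G M_total / s), reducing the assumed total mass from 2.0 to 1.4 solar masses lowers the Newtonian velocity line by the square root of the mass ratio:

```python
import math

M_ASSUMED, M_ACTUAL = 2.0, 1.4  # total binary masses in solar masses
factor = math.sqrt(M_ASSUMED / M_ACTUAL)  # Keplerian velocity scales as sqrt(M_total)
print(round(factor, 2))  # 1.2: the line must be shifted down by ~20% in velocity
```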

Figure 2 shows fig. 6 from the Cookson paper, where the large separation binaries towards the right are claimed to be consistent with the Newtonian line shown, although the high acceleration tight binary region to the left is clearly sub-Newtonian, which makes no sense. That this line has to be shifted downwards is not only evident from the fact that I calculated it assuming a larger binary mass than the one in the Cookson sample, but also from the obvious mismatch between the data and the Newtonian line in the tight binary, high acceleration Newtonian region. In the Cookson data, the Newtonian region appears as sub-Newtonian! The whole purpose of including these tight binaries is to serve as an internal check on the whole procedure, where results must match the Newtonian prediction before attempting any inferences on what might be going on in the low acceleration region in dispute. It is hard to understand the error in question, given that the assumed value of 2.0 solar masses for the mass calibration of the Newtonian line is made explicit not only in my 2023 paper, which Cookson cites, but also in my 2022 paper, where Cookson is listed as an author! Figure 3 shows this same figure again, but with a re-scaled Newtonian line added, taking into account that the average masses of the binaries being plotted are not 2.0 but closer to 1.4 solar masses. The tight binaries towards the left are now clearly consistent with the re-scaled Newtonian line, and the wide ones towards the right show the same anomaly which Kyu and I have reported, see e.g. Kyu’s fig. 13 in Chae (2024). The above is particularly clear if we consider only the separation range below 0.15 pc and disregard statistically irrelevant bins such as the last one, which includes only 6 binaries.

Figure 2: This is figure 6 from Cookson (2024).

Figure 3: The same as Figure 2 to which I have added here a re-scaled Newtonian line in accordance with the average total masses of the sample being close to 1.4 Solar masses, and not 2.0 Solar masses, as I had assumed originally when calculating the line in question.

Further errors appear in the Cookson paper, for example the extension of the wide binary sample out to immense projected separations of 0.5 pc, i.e. 100,000 AU. Because of the effects of tides and stellar encounters over the lifetimes of these binaries, present projected separations this large inform us mostly about the dynamical environments of these systems, and not about their internal gravity. That is why Kyu and Banik et al. stop at 0.15 pc, Pittordis & Sutherland (2023) stop at 0.1 pc, and, out of an abundance of caution, I stop at 0.06 pc. Finally, the Cookson paper purports to call into question the results of Kyu and me, which include independent and careful statistical analyses showing the full velocity distributions of the GAIA binaries to be consistent with detailed Newtonian predictions in the high acceleration regime, and with detailed MOND predictions in the low acceleration one. And yet, the Cookson paper includes no detailed statistical analysis of the velocity distributions at all. Only RMS values in arbitrarily defined bins are presented in the Cookson paper, a rough qualitative first-order comparison which I was originally using, and which hence cannot show his study to be consistent with Newtonian gravity, or inconsistent with MOND, in any of the regimes probed at any level of detail.

NOTE added by P.Kroupa: In view of the results above, DMC 93 explains three additional major recent advances which compellingly tell us that MOND indeed appears to be the correct effective description of gravitational dynamics.

The interested reader is directed to the papers cited above for a more thorough exposition of the problem and the various details involved, many of which have been excluded from this brief introduction.


In The Dark Matter Crisis by Elena Asencio, Moritz Haslbauer and Pavel Kroupa. A listing of contents of all contributions is available here.

We had a recent case where a submitted comment to The Dark Matter Crisis did not appear in the system, the comment being swallowed. The user had to use a different browser to submit the comment which we then approved. In case you submit a comment and it does not appear, try another browser and/or send us an email.

Guest post by Indranil Banik

I recently attended a conference at the Aristotle University of Thessaloniki in Greece about issues faced by the ΛCDM standard cosmological paradigm. In a strange twist of fate, I could not get a direct flight from Edinburgh and actually had to go via Bonn, where I did a Humboldt fellowship for three years.

Day 1

The conference opened with a talk by Avi Loeb on foundational issues in cosmology. His unique perspective was a great way to open the conference, especially given the many failures of ΛCDM that have arisen in recent years. He went into why a phantom equation of state for dark energy is very unlikely theoretically, even though it seems to be preferred by some observations, including results from the recent Dark Energy Spectroscopic Survey (DESI Collaboration 2024).

The next talk was by Pavel Kroupa, who went into various important constraints on any cosmological model. One of these is the cosmology-independent determination of the age of the Universe from the ages of the oldest stars and globular clusters (Cimatti & Moresco 2023). In the past, such work was difficult because one really needs to know the trigonometric parallax of a star to know its absolute magnitude and thus its mass. Applying stellar evolution theory then tells you how old the star must be if you are observing it near the end of its life. Recently, parallax measurements have improved drastically and now extend out much further thanks to the Gaia mission.

The conference was of course dominated by discussions about the Hubble tension. This relates to the present expansion rate of the Universe, denoted H0. We can obtain H0 by looking at the 0.001%-level anisotropies in the cosmic microwave background (CMB), which we think was emitted at redshift 1100 when neutral hydrogen atoms first became stable in the rapidly cooling infant Universe. It should be obvious that one cannot directly obtain H0 from observations of such early times, but analysing
Planck observations of the CMB in the context of ΛCDM does give a precise value for H0. The other way to get H0 is based on the fact that as we look further away, we are looking back in time, when the Universe was smaller. Since photons redshift with the expansion of the spacetime fabric, photons from further away are redshifted to a greater extent. H0 can be obtained from the gradient of redshift with respect to distance. This relation has some curvature of course, so the relevant quantity is the limiting value of the gradient as one gets all the way down to redshift zero and reaches the present epoch. In a homogeneously expanding universe, this redshift gradient is H0 divided by the speed of light. The H0 value obtained in this way is typically close to that found by the SH0ES team using Cepheid variable stars and Type Ia supernovae to get distances, which are then combined with spectroscopic redshifts (these are typically much easier to obtain). There is clear water between this locally determined H0 and the Planck value, as shown in Figure 1. This disagreement is known as the Hubble tension, which has plunged cosmology into a crisis. We have all heard that the Universe is expanding, but how fast?
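In the low-redshift limit, the redshift gradient gives H0 directly via cz ≈ H0 d. A minimal sketch with hypothetical numbers (a real measurement averages many objects with carefully calibrated distances):

```python
C_KM_S = 299792.458  # speed of light in km/s

def h0_from_redshift(z, distance_mpc):
    """Toy low-redshift estimate of H0 in km/s/Mpc from a single object."""
    return C_KM_S * z / distance_mpc

# hypothetical supernova host galaxy: redshift 0.01 at a calibrated distance of 41 Mpc
print(round(h0_from_redshift(0.01, 41.0), 1))  # ~73.1, a SH0ES-like local value
```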

FIGURE 1: The present expansion rate of the universe, estimated from the local gradient of redshift with respect to distance (blue band) and by extrapolating Planck observations of the CMB in the infant Universe using the ΛCDM model (red band). The expansion rate H0 is inversely proportional to the age of the Universe, which can be obtained independently of cosmology from the oldest stars and globular clusters in the Galactic halo. The matter density parameter is taken from late Universe probes
(Lin et al. 2021) and is accurate enough to introduce negligible uncertainty into the H0 estimate, which is shown using black points with error bars. Different points vary the average time needed to form the eleven objects carefully chosen by Cimatti & Moresco for the age determination. The nominal formation time of 0.2 Gyr is shown with a larger dot. The factor of 2 uncertainty in this barely affects the resulting H0, which strongly favours the Planck determination. Credit: Vasileios Kalaitzidis with funding from the Royal Astronomical Society.

The age of the Universe can help to arbitrate this battle of egos between the Planck and SH0ES teams for the true expansion rate of the Universe today. The age constraint can be converted into a constraint on H0. This is because a higher expansion rate implies a younger universe. Figure 1 shows that the age of the Universe strongly prefers the Planck background cosmology with a low H0. The agreement is remarkably good considering that observations of the CMB at redshift 1100 when the universe is not even 400,000 years old can only indirectly determine the age of the universe in the context of a cosmological model. Extrapolating this model billions of years to a time when the universe is over a thousand times larger leads to a predicted age which is quite close to the age obtained completely independently by combining observations of the oldest stars and globular clusters with our understanding of stellar structure and evolution. This is completely unrelated to the physics of sound waves in the baryon-photon plasma in the infant universe. While that is undoubtedly a major success story for ΛCDM, the high local redshift gradient is another test at late times which ΛCDM fails very badly.
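The inverse relation between H0 and the age can be made concrete with the standard closed-form age of a flat ΛCDM universe (a sketch assuming a matter density parameter of 0.315 and the conversion 1/H0 = 977.8/H0 Gyr for H0 in km/s/Mpc):

```python
import math

def age_gyr(H0, Om=0.315):
    """Age of a flat LCDM universe in Gyr, for H0 in km/s/Mpc."""
    OL = 1.0 - Om                 # dark energy density parameter
    hubble_time_gyr = 977.8 / H0  # 1/H0 converted to Gyr
    return (2.0 / 3.0) * hubble_time_gyr / math.sqrt(OL) * math.asinh(math.sqrt(OL / Om))

print(round(age_gyr(67.4), 1))  # Planck-like H0: ~13.8 Gyr
print(round(age_gyr(73.0), 1))  # SH0ES-like H0: ~12.7 Gyr, younger than the oldest stars allow
```

A higher local H0 therefore implies an age that sits uncomfortably below the cosmology-independent stellar ages, which is exactly the argument in Figure 1.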

Eleonora di Valentino went into the observational evidence for this Hubble tension in some detail. She has previously compiled lists of all the major early-time and late-time probes. An updated version of her results is shown in Figure 2 based on Riess & Breuval 2024. It is clear that the high local redshift gradient is not a measurement error. In general, any solution to the Hubble tension should explain the CMB power spectrum and the local redshift gradient. It is generally no longer considered viable to achieve that by finding some excuse for why all the different ways to measure distances in the local Universe are off by the same amount in the same direction despite using different techniques and being published by different teams. There are no reasonably precise local measurements that give a smaller redshift gradient than the Planck prediction, even though this should happen about half the time if the Universe had followed ΛCDM and the Hubble tension arose merely because observational uncertainties had been underestimated.

FIGURE 2: Different estimates of H0, with early time measurements shown above the dotted line and the local redshift gradient shown below the dotted line (the age constraint in Figure 1 is not shown here). Results using similar techniques are grouped together and shown in the same colour. Reproduced from figure 10 of Riess & Breuval 2024.

Leandros Perivolaropoulos attempted to pin down just when the expansion history deviated from the Planck cosmology. There seems to be a growing consensus among cosmologists that if H0 is inferred from data at higher redshift, the result is lower and more in line with the Planck value. Meanwhile, inferring H0 from lower redshift data gives a higher value. This was also the subject of a talk by Maria Dainotti on the second day, who has been claiming for some time that there is a descending trend in H0 with redshift. Shortly after the conference finished, a preprint along similar lines was posted by Jia, Hu & Wang 2024. The deviation from the Planck background cosmology appears to arise specifically at late times/low redshift, as shown in Figure 3. This is just what would be expected in a local void scenario.

FIGURE 3: The inferred value of H0 from only the data in a narrow redshift bin, as a function of the bin centre redshift. Notice how H0 quickly decreases from the high local value down to the Planck value and then remains roughly flat at that level. Reproduced from figure 1 of Jia et al. 2024.

An important constraint on the expansion history of the Universe is provided by baryon acoustic oscillations (BAOs), which are nicely illustrated in this very short video. BAOs serve as a standard ruler with fixed comoving size since shortly after recombination. BAOs were the subject of a talk by Stefano Anselmi, who proposed looking not at the peak of the BAO bump on an angular power spectrum, but at a different location on slightly smaller angular scales. Compared to a smooth fit, the power spectrum first goes down and then up to a peak, which is the BAO bump. It was suggested to find the point where relative to a smooth fit, the power spectrum after going down gets back up to the same power as for the smooth fit. This should be somewhat more immune to various systematic effects, especially in models beyond ΛCDM.

After this, the focus shifted more towards structure in the Universe. An important talk on this area was delivered by Elena Asencio, a PhD student in Bonn and administrator of the MOND community mailing list. The topic was the galaxy cluster collision known as El Gordo (Asencio, Banik & Kroupa 2021). This is an interacting pair of galaxy clusters at redshift 0.87 with a total mass of about 2 × 10¹⁵ Solar masses. Detailed studies of the interaction imply a very high infall velocity of at least 2000 km/s when the separation of the clusters was twice the sum of their individual virial radii. Our study used an innovative two-step procedure to address whether the combination of high mass, redshift, and collision velocity is even allowed in ΛCDM. The two steps are shown in Figure 4, with our work focusing on the first step. This complements the second step of detailed but idealised non-cosmological simulations of two colliding clusters that pin down the required pre-interaction properties. El Gordo excludes ΛCDM at over 5 sigma confidence.

FIGURE 4: Large cosmological simulations can yield a predicted frequency for collisions between galaxy clusters of different masses, but they are not suited for reproducing the detailed morphology of an individual collision like El Gordo. For that, one needs a detailed simulation of the interaction itself. Among other things, this allows the evolution to be explored from all viewing angles at much finer timesteps. It also allows a much greater degree of fine-tuning to the initial conditions, especially in terms of the pre-interaction infall velocity and the impact parameter. In this way, the required pre-interaction state can be estimated. But since this is done in a non-cosmological simulation where one can have clusters of any mass and infall velocity at any redshift, it is not guaranteed that such a pre-interaction scenario is plausible in the ΛCDM cosmological model. It is therefore essential to consider both links in this chain, regardless of how many papers have to be combined to do so. Reproduced from figure 1 of (Asencio et al. 2021).

Despite some very dodgy claims by Kim et al. 2021 that El Gordo is fine in ΛCDM using a healthy dose of circular arguments, Elena later showed that the latest constraints on the weak lensing mass of El Gordo only slightly reduce the tension with ΛCDM (Asencio, Banik & Kroupa 2023). Her talk went through both papers, which are nicely summarised in this blog post. During the questions, I pointed out that uncertainties in the weak lensing mass are quite small. I also mentioned a recent study which confirms that the morphology of El Gordo can only be reproduced with an infall velocity so fast that the initial conditions are not compatible with ΛCDM (Valdarnini 2024). A half hour talk I gave about El Gordo is here, while a longer talk by Elena that goes through the statistical analysis more thoroughly is here. A blog post explaining the earlier 2021 paper is available here, though our 2023 paper is very short and well worth a full read. Our results on El Gordo strongly suggest that structure forms more efficiently than predicted by ΛCDM.

Day 2

The second day of the conference opened with a critical talk by Rick Watkins about his paper on the measurement of the bulk flow out to almost 400 Mpc (Watkins et al. 2023). The bulk flow is the average velocity of all the galaxies within a sphere of some fixed radius. If the magnitude of the resulting average velocity is plotted as a function of the radius of the sphere, you get a bulk flow curve. There are some technicalities one has to bear in mind, which are discussed further in this blog post. One of the most important is that observers have to make do with only line of sight velocities, treating these as vectors pointing along the line of sight whose vector average is then taken. Averaging all the galaxies in a sphere means that the results are not sensitive to the assumed value of H0, even though its assumed value would affect the peculiar velocity of any individual galaxy. The bulk flow can be thought of as a dipole in the redshifts of galaxies at a fixed distance from us. Rick explained how the bulk flow curve is incompatible with ΛCDM expectations at >5σ confidence once you go beyond about 230/h Mpc or about 320 Mpc. This is because the universe is supposed to be quite homogeneous on such a large scale, so a sphere with such a large radius should barely be moving as a whole in ΛCDM. The observed bulk flows are about quadruple the ΛCDM expectation. These results were also confirmed by Whitford et al. (2023), who reported “excellent agreement” with the Watkins et al. 2023 results out to 173/h Mpc or about 240 Mpc. At small radii, there is reasonably good agreement with the precisely measured peculiar velocity of the Local Group relative to the CMB. 
We would expect the bulk motion on the scale of a few dozen Mpc to be about the same as the Local Group velocity, which we expect arises mainly from quite distant structures (much like the motion of the Sun around the Galactic centre is mainly caused by the pull of stars near the Galactic bulge rather than nearby stars). It is very reassuring that this is indeed the case, though the values are not exactly the same, presumably because galaxies slightly beyond the Local Group do matter to some extent. The direction of the Local Group peculiar velocity and the bulk flow are also fairly similar, again as expected. The results presented by Rick are based on the CosmicFlows-4 catalogue, one of the most reliable compilations of extragalactic redshift-independent distances you can find. Combining these distances with spectroscopic redshifts is the basis for the bulk flow results and the significant tension they reveal with ΛCDM.
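The line-of-sight projection mentioned above can be illustrated with a toy estimator (my own sketch, not the minimum-variance method of Watkins et al.): for isotropic sky coverage, the vector average of line-of-sight velocity vectors recovers only one third of the true bulk flow, so a factor of 3 restores it.

```python
import numpy as np

rng = np.random.default_rng(0)
U = np.array([300.0, -100.0, 50.0])  # hypothetical bulk flow in km/s

# random isotropic unit vectors: sight lines to the mock galaxies
n = rng.normal(size=(200000, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)

v_los = n @ U  # only the line-of-sight component of the flow is observable
estimate = 3.0 * np.mean(v_los[:, None] * n, axis=0)  # factor 3 undoes the isotropic projection
print(np.round(estimate))  # close to U
```

The factor of 3 arises because the average of the outer product of isotropic unit vectors is one third of the identity matrix; real surveys with uneven sky coverage need the more careful weighting schemes used in the published analyses.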

There was also a talk by Francesco Sorrenti on evidence that the local redshift gradient depends on the direction in which you observe. There have been a few quite detailed and more recent explorations of this issue (Kalbouneh et al. 2023 and Hu et al. 2024). It is noteworthy that the measured variation across the sky is of a similar magnitude to the 9% Hubble tension itself. This is what you would generically expect if the Hubble tension is caused by a local 9% effect, but you are not exactly in the middle of the local structure and thus see somewhat different results in different directions on the sky. Only in recent years has the sample size of supernovae increased sufficiently to allow such studies, though the distribution of supernovae is still very far from isotropic across the sky. Care needs to be taken when comparing to a theoretical model of the local velocity field.

Very shortly before the conference dinner, I gave my talk on the local void solution to the Hubble tension (similar talk here and short Conversation article here). I explained how our model predicted the bulk flow curve without further adjustments to the model. In particular, the Watkins et al. (2023) observations agree reasonably well with 2 out of the 6 models considered plausible a priori in the pioneering study of Haslbauer, Banik & Kroupa (2020). This bulk flow success (Mazurenko et al. 2024, referee: Rick Watkins) was nicely summarised by Elena Asencio and Abbe Whitford in an AstroBite.

At the dinner, I had some interesting discussions about my talk. I was mainly met with a possible objection regarding whether lensing effects by a local void would impact on the amplitude of the quadrupole observed in the CMB (Alnes & Amarzguioui 2006). This turns out to be only a very minor effect, with higher multipoles (smaller angular scales) being even less affected by a local void (see also Nistane et al. 2019). It is possible that since the impact of a local void on the CMB quadrupole adds to any intrinsic quadrupole, the extra cosmic variance in the observed amplitude of the CMB quadrupole explains why it is uncomfortably small compared to the ΛCDM expectation (it could also have been uncomfortably large). More generally, it is interesting that a local supervoid would largely not affect our observations of the CMB anisotropies. Since the idea of a local void model is to preserve the Planck cosmology at the background level and since a local void would not much affect the CMB anisotropies, the only thing one might need to worry about is any further changes to the physics at early times. But I think that enhancing structure on large scales requires a change to gravity on large scales, which would simply not affect the early universe because it was too small. In particular, the sound horizon at the epoch of recombination was obviously far smaller than the size of the KBC void.

Two other noteworthy talks on the second day were by Glenn Starkman and Joann Jones, who recently led a preprint on various CMB statistics that appear to be in tension with ΛCDM. The statistics were by no means standard statistics like the power spectrum, but involved some fairly complicated functions of it. These functions were almost certainly chosen to cause tension between the observed value of the statistic and the expected range in ΛCDM. This look-elsewhere effect was not really addressed in the talks. For instance, one of the allegedly anomalous statistics is the large-angle correlation parameter, which quantifies the total power in the CMB when considering angular scales between 60° and 180°. While the reason for the upper limit at 180° is clear, the lower limit could be chosen differently, for instance at 45°. Moreover, the preprint and the related talk combine four of these tensions, each of which is < 3 sigma significant. Joann argued that the tensions were largely independent. She then went on to find the likelihood of all four statistics being more extreme than observed, using Monte Carlo realisations of the CMB sky in ΛCDM. If the assumption of independence is correct, as she argued, this would be equivalent to multiplying the individual likelihoods, which is not at all justified. To see why, suppose there are four independent Gaussian random variables with mean zero and unit dispersion. At least, that is the theory. Now suppose all of these are observed to have a value of 2. Each of these variables is rather high compared to the theory. The likelihood of an even higher value is only 2.275%, which is a reasonable way to compare each variable with the model (though one should do a two-tailed test and quote 4.55% as the probability of an observation that is even less likely given the model). To combine all four independent variables, one could multiply all the probabilities together, which would give 0.02275⁴ ≈ 2.7 × 10⁻⁷.
This is well over the 5σ significance threshold, so the model must be rejected. But it is very wrong to combine probabilities in this way. What should actually be done is to note that each variable contributes a χ² of 4, so the total is 16. Even if we had just one variable, a χ² of 16 would be equivalent to a tension of only 4 sigma. With four variables, the relevant theoretical probability distribution is the χ² distribution with four degrees of freedom, in which case an observed value of 16 implies a tension very close to 3 sigma. Applying a similar argument to the claimed CMB anomalies shows that the tension is close to but does not quite reach the 5 sigma threshold. This is assuming the tensions are independent and have Gaussian-like tails, which need not be the case. A more careful calculation would involve looking at the joint probability density in 4D space and drawing a contour through the observed point, so that the total probability outside this contour can be quantified. This would be the actual level of tension with ΛCDM. If the simplifying assumption of independence is made, the easiest way to do this is to fit an analytic function to the distribution of each variable and then set up a 4D grid using these four analytic functions rather than the distribution of points from Monte Carlo realisations. This gets around the extreme difficulty of getting enough such realisations to do a joint 4D statistical hypothesis test as Joann was trying to do. Moreover, one also has to account for the look-elsewhere effect. Because of these issues, the CMB anomalies were not taken seriously by the vast majority of delegates as evidence for a breakdown of ΛCDM. It was, however, argued that the anomalies might indicate that the Universe is not infinite, though its closed radius of curvature would need to be not much larger than the size of our cosmic horizon.
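The combination argument can be checked numerically. A minimal stdlib-only sketch (these are standard textbook formulas, nothing specific to the preprint being discussed):

```python
import math

def gauss_sf(z):
    """One-sided tail probability P(Z > z) for a standard Gaussian."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def chi2_sf_4dof(x):
    """Survival function of the chi-squared distribution with 4 degrees of
    freedom, which has the closed form exp(-x/2) * (1 + x/2)."""
    return math.exp(-x / 2.0) * (1.0 + x / 2.0)

p_each = gauss_sf(2.0)          # ~0.02275 for each variable observed at 2
p_naive = p_each ** 4           # naive product: ~2.7e-7, i.e. beyond 5 sigma
p_joint = chi2_sf_4dof(16.0)    # correct joint test: chi^2 = 4 * 2**2 = 16
print(p_each, p_naive, p_joint)  # p_joint ~ 0.003, i.e. close to 3 sigma
```

The naive product overstates the tension by almost four orders of magnitude relative to the correct joint χ² test.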

Day 3

The third and last day of the conference opened with a talk by Mark Trodden on the early dark energy solution to the Hubble tension. I pointed out that since such solutions have the expansion rate being 9% faster than the Planck cosmology over the vast majority of cosmic history, the universe becomes about 8% younger. This translates to more than a billion years, which causes problems with the age of the universe estimated independently of cosmology (Figure 1). Mark had not calculated the age of the universe in his proposed model. His talk raised so many other problems with the whole early dark energy approach that it is unclear if he really thinks it is the right way forward. It was also the only talk to advocate an early time solution.
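The age argument can be checked with back-of-the-envelope arithmetic, treating the age of the universe as roughly proportional to 1/H0 (only an approximate shortcut, since the precise age depends on the full expansion history):

```python
planck_age_gyr = 13.8   # Planck age of the universe, in Gyr
speedup = 1.09          # expansion rate 9% faster over most of cosmic history

# Age scales roughly as 1/H0, so a 9% faster expansion gives a universe
# younger by a factor 1/1.09, i.e. about 8%.
younger_frac = 1.0 - 1.0 / speedup
age_deficit_gyr = planck_age_gyr * younger_frac
print(round(100 * younger_frac, 1), round(age_deficit_gyr, 2))  # ~8.3% and ~1.14 Gyr
```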

My overall take was that early time solutions are very much out of favour, especially given the seven major problems identified in a recent review (Vagnozzi 2023). Many of the talks at Thessaloniki focused on the role of structure and peculiar velocities in the local determination of the Hubble constant, and related observational evidence. So it seems like researchers are gradually warming to the idea that the Hubble tension is a real issue for ΛCDM and that the solution is fairly local, or at least at late times in cosmic history.

Local void teleconference

I recently organised a teleconference through the mailing list I set up to discuss the local void solution to the Hubble tension. My next boss Harry Desmond hosted it and about a dozen people attended it, with discussions covering various topics. We initially discussed the galaxy number count data, which shows clear evidence of a local supervoid that is not compatible with ΛCDM. We then discussed the consistency with BAO and supernova results, highlighting the need for further analyses of supernova data. I expect to give a talk on the BAO results on 23rd July 2024 at 1 p.m. UK time as BAOs are a lot simpler to interpret and analyse. It was also pointed out that for the local void scenario to work in detail, we would need to be located in a particular part of the void which could be argued to be special at the 2% level. This is not fine-tuned by reasonable scientific standards. No major objections were raised to the local void scenario. Slides can be shared on request, and the same applies to access details for the above talk if you are on a temporary contract or in the first few years of a permanent contract, or work outside academia.

I will briefly take this opportunity to discuss a few other concerns that have been raised elsewhere (several objections were addressed in section 5.3 of Haslbauer, Banik & Kroupa 2020). ΛCDM cannot explain a supervoid of the sort observed and required to solve the Hubble tension. But this is obvious as otherwise there would be no Hubble tension – people would expect the local measurement to stochastically scatter around the actual value to such an extent that a 9% difference either way would be totally plausible. A local void solution requires structure formation on large scales to be faster than in ΛCDM. This is not a scientific problem with the local void scenario, but it could be a sociological explanation for its unpopularity. Another major sociological reason could be that experts on the local distance ladder want to measure H0, but if there is significant cosmic variance in the local determination, their work would be much less accurate as a measure of the actual expansion rate. Instead, with very precise observations, percent-level differences with the CMB-derived H0 would largely be measuring the impact of local structure. Depending on how overinflated the egos of the researchers involved are, this could be an important sociological consideration.

A more genuine objection which sometimes comes up is that the power spectrum of galaxies on the relevant 100 Mpc scales works fine in ΛCDM, so it is not possible that density fluctuations on these scales are double what is expected in that model. The problem with this argument is that the relevant observations probe only the tip of the galaxy luminosity function. It is well known that the brightest galaxies are biased tracers of the underlying matter distribution because only its peaks can serve as hosts for such galaxies. This has led to the concept of a bias factor, which is the ratio between contrasts in the number density of the brightest galaxies and in the underlying matter distribution. Given the uncertainties of baryonic physics in ΛCDM, the bias factor can be altered at will. It is therefore chosen to match the observations. This circular logic does not favour ΛCDM, but merely clarifies how it would have to work to fit the data. An independent test of ΛCDM would require the bias factor to be determined independently of the model, which requires the dark matter to be discovered. Alternatively, one can take sufficiently deep observations covering the majority of the galaxy luminosity function, which would make the observed galaxy number density a good tracer of the total matter distribution, i.e. the bias factor would be very close to 1. This is just what was done in Keenan, Barger & Cowie (2013) when they announced the KBC void in a much clearer way than prior studies. But it is difficult to obtain similarly deep observations much further out, which would be necessary to find the typical density fluctuations on a 300 Mpc scale from studying a much larger volume. As a result of uncertainty in the bias factor, it is not known precisely how clustered matter is on a scale of 300 Mpc.

Summary

Overall, the conference at Thessaloniki and the teleconference I organised on the local void scenario highlighted several important things:

  1. There is overwhelming observational evidence for the Hubble tension, with early Universe observations other than Planck measurements of the CMB also requiring a low H0 and many different local distance ladder techniques not involving Type Ia supernovae returning a high local redshift gradient with respect to distance. The Hubble tension can no longer be assigned to observational systematics (Riess & Breuval 2024, and references therein).

  2. The age of the Universe obtained independently of cosmology indicates that the Planck background cosmology is way closer to reality than a background cosmology calibrated to the local redshift gradient, which must therefore have been inflated in some way (Cimatti & Moresco 2023). This requires non-cosmological contributions to the redshift out to several hundred Mpc amounting to about 10% of the total redshift.

  3. Early time solutions to the Hubble tension are no longer favoured for various reasons unrelated to the above, though that is an additional very good argument (Vagnozzi 2023).

  4. There is a return to the Planck cosmology at higher redshift, which is a key prediction of any local solution to the Hubble tension (Jia et al. 2024).

  5. There is a degeneracy between a local and a late-time solution to the Hubble tension because of the finite speed of light, but nonetheless trying to explain the Hubble tension with a change to fundamental physical constants at very late times in cosmic history would be extremely fine-tuned. Distance ladders are based on various rungs, e.g. parallax distances to Cepheids, Cepheid distances to galaxies with supernovae, and finally supernovae further out to calculate the local redshift gradient. A sharp change to the gravitational constant in the somewhat narrow range of overlap between the last two rungs at about 40 Mpc (i.e., about 130 Myr ago) may solve the Hubble tension, but this is so fine-tuned that this possibility is not taken seriously (Ruchika et al. 2024).

  6. There is a lot more emphasis on local structure and peculiar velocities skewing the local redshift gradient and possibly solving the Hubble tension. This brings to mind a famous quote from Arthur Conan Doyle’s character Sherlock Holmes: “Once you eliminate the impossible, whatever remains, no matter how improbable, must be the truth.” Or to paraphrase Winston Churchill: “A local void is the worst way to solve the Hubble tension, until you consider all the other proposals.”

  7. In ΛCDM, peculiar velocities cannot plausibly affect the local redshift gradient at the 9% level and thereby solve the Hubble tension, so a local void solution would require enhanced structure growth. There is a lot of positive evidence for structure formation being more efficient than predicted in ΛCDM. In particular, anomalously fast bulk flows (Watkins et al. 2023 and Whitford et al. 2023) and evidence that the local redshift gradient depends on the direction in which you observe (Kalbouneh et al. 2023 and Hu et al. 2024) all indicate that peculiar velocities are larger than expected. The KBC void (Keenan et al. 2013 and Wong et al. 2022) and El Gordo (Asencio et al. 2021, 2023) also imply that structure forms faster than expected in ΛCDM.

It is no longer tenable to assign the Hubble tension to observational errors, new physics restricted to the early universe, or a slight adjustment to the Planck background expansion history that achieves a 9% faster expansion rate today while only marginally affecting the growth of structure. Instead, it is clear by now that structure growth needs to be significantly enhanced on large scales. We also need to solve the Hubble tension. Given the overwhelming evidence that we are in a large and deep void from studies across the whole electromagnetic spectrum, it would make most sense if outflow from this void were to solve the Hubble and bulk flow tensions.

The local void scenario is unique among solutions to the Hubble tension in that it was not originally proposed as a way to solve the Hubble tension. It also correctly predicted the bulk flow curve. Other proposed solutions were generally invented specifically to solve the Hubble tension, and often made predictions that were later falsified. While I have no doubt that other researchers will continue to come up with ideas for the Hubble tension, they should bear in mind that they need to fit much additional data besides the local redshift gradient and the CMB anisotropies in order to justify whatever extra complexity they introduce.


In The Dark Matter Crisis by Elena Asencio, Moritz Haslbauer and Pavel Kroupa. A listing of contents of all contributions is available here.

We had a recent case where a submitted comment to The Dark Matter Crisis did not appear in the system, the comment being swallowed. The user had to use a different browser to submit the comment which we then approved. In case you submit a comment and it does not appear, try another browser and/or send us an email.

(by Pavel Kroupa and Eda Gjergo)

Over the past few months, three significant results have been published, each offering critical insights into the gravitational potentials around galaxies. Each of these findings poses substantial challenges to the dark-matter-based approach to galactic dynamics and cosmology. Consequently, the implications of these results call into question the validity of the LCDM model of cosmology. The three results are presented in turn below.

First test: do dark matter halos exist? The Chandrasekhar dynamical friction test

This test is also covered in DMC91. The LCDM theory of cosmology relies on each galaxy having a massive and extended dark matter halo around it. This dark matter is needed because, under Newtonian dynamics, galaxies could not otherwise remain bound and stable over cosmic timescales: the gravitational binding inferred from the observed motions is stronger than what the visible components alone can provide.

If this LCDM theory is correct, then we know, within a certain range of uncertainty, which dark matter halo the Milky Way (MW) has, which dark matter halo the Large Magellanic Cloud (LMC) has, and which dark matter halo the Small Magellanic Cloud (SMC) has. The uncertainties on the dark matter halos lie in the exact mass, extent, and shape, but these uncertainties are known from the many simulations of structure formation that have been published. The shapes and density profiles of the dark matter halos, if they exist, can be well approximated by spherical Navarro-Frenk-White (NFW) profiles. The departures from this NFW form are small and not important for this test. At the same time, the European Gaia astrometric space mission has measured the distances and motions of the LMC and SMC relative to each other and to the MW. The LMC and SMC, the Magellanic Stream and the MW are shown in Figure 1. We can therefore calculate, using Newtonian (i.e. Einsteinian) gravitation (i.e. with the NFW dark matter halos), where each of these three came from backwards in time and how their orbits progressed in time.


Figure 1 The LMC and SMC and the Magellanic Stream (red tail), which is hydrogen gas drawn or pushed out of the LMC and SMC as they pass the MW, as seen from Earth on the sky, with the MW disk evident as the bluish-white haze of billions of stars in the mid-plane. The centre of the MW is at the centre of the figure. Nearby dark dust clouds in which new stars are forming are also seen. Credit: NASA.

To do this, it is necessary to compute the Chandrasekhar dynamical friction force, which is the friction experienced by a massive body, like a dwarf galaxy, moving through a field of less massive particles, such as those that populate a dark matter halo. The LMC should experience dynamical friction in the MW and SMC dark matter halos. And similarly, the SMC should experience dynamical friction in the MW and LMC dark matter halos. If the Chandrasekhar dynamical friction were switched off, i.e. in the absence of dark matter, the system would be in perfect agreement with Newtonian gravity, but this would not allow one to explain the flat rotation curves of galaxies and would also lead to the LMC and SMC racing past the MW for the first and only time, as they would be unbound from the MW in this case.

Calculating the dynamical friction force which decelerates the LMC and SMC is not difficult, because Chandrasekhar dynamical friction is very well understood. It results from the myriads of tiny dark matter particles being gravitationally flung around each of the galaxies such that the galaxies slow down. It is the same physical mechanism we use to launch robots into deep space by letting them slingshot past our planets. The planets become slightly slower, but since we only send out the occasional robot, the effect on the motions of the planets is completely negligible. In the case of the LMC and SMC moving past the MW, the effect is, however, extremely important because of the very large number of dark matter particles involved. The nice thing about this test, which makes it so powerful and robust, is that Chandrasekhar dynamical friction does not depend on the mass of the dark matter particle; it only depends on the mass density of the dark matter halo. And this is fixed by the theory and the requirement that it also fit the CMB power spectrum.
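To make the point concrete, here is a minimal sketch of the standard Chandrasekhar formula for a Maxwellian background, with purely illustrative numbers (not those of the Oehm & Kroupa analysis). Note how only the background mass density enters, never the particle mass:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

# Chandrasekhar dynamical friction deceleration for a satellite of mass M
# moving at speed v through a Maxwellian background of mass density rho and
# velocity dispersion sigma.  The individual particle mass does not appear;
# only the density rho does.
def df_deceleration(M, v, rho, sigma, ln_lambda=3.0):
    X = v / (math.sqrt(2.0) * sigma)
    maxwell = math.erf(X) - 2.0 * X / math.sqrt(math.pi) * math.exp(-X * X)
    return 4.0 * math.pi * G**2 * M * rho * ln_lambda * maxwell / v**2

M_sun = 1.989e30                          # kg
a = df_deceleration(M=1.5e11 * M_sun,     # LMC-like total mass (illustrative)
                    v=3.0e5,              # 300 km/s
                    rho=1e-4 * 6.77e-20,  # ~1e-4 Msun/pc^3 halo density in kg/m^3
                    sigma=1.5e5)          # 150 km/s dispersion
dv_per_gyr = a * 3.156e16 / 1000.0        # accumulated deceleration, km/s per Gyr
print(round(dv_per_gyr))                  # tens of km/s per Gyr: far from negligible
```

Even with these rough numbers, the accumulated deceleration over a Gyr is comparable to the orbital speeds involved, which is why the friction cannot be ignored if the halos exist.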

The publication “The Relevance of Dynamical Friction for the MW/LMC/SMC Triple System” (Oehm & Kroupa 2024) tells us that the triple system of galaxies (MW/LMC/SMC) cannot be in the observed configuration in the presence of Chandrasekhar dynamical friction: in order for the LMC to be today at about 50 kpc from us and moving past the MW at about 400 km/s, and for the SMC to be at a distance of 60 kpc from us and about 20 kpc from the LMC and moving past the LMC at about 100 km/s, there can be no dark matter halo. The triple system is too compact and fast and allows no prior orbital history within the 5 sigma range of uncertainties of the observed parameters (masses, distances and velocities).

A solution for the past orbital history of the LMC and SMC must include at least one close encounter (the two approaching each other to within 20 kpc) in the time interval 1-4 Gyr ago, in order to create the Magellanic Stream (we refer to this as the condition COND). That the LMC and SMC have been interacting vehemently even very recently can be seen in the bridge of gas (the Magellanic Bridge) observed between them. In searching for orbital solutions with the condition COND we are being extremely conservative and careful, and the two search algorithms applied (a genetic algorithm and a Markov-Chain Monte-Carlo method) yield the same orbital solutions, given the constraints on the present-day position and velocity vectors and masses of the MW, LMC and SMC, and the condition COND. It is important that two independent search methods are used, as this allows the orbital solutions to be checked for consistency. Neither algorithm finds a viable solution: the best candidate orbits found by the two methods agree next-to-perfectly with each other, but their likelihood is so small that they are excluded at more than the 6 sigma level, given the data. It is thus not possible to fulfil all of the constraints simultaneously.

But the situation is far worse than this: instead of having had only one encounter 1-4 Gyr ago, the LMC and SMC appear to have had four encounters in the past 3 Gyr! This is seen in their synchronised star-formation histories (Figure 1a, taken from Massana et al. 2022): the LMC and SMC have been dancing about each other, and each time they came close to each other there was an increase in their star-formation activity, i.e. the star-formation rate, SFR, increased (the SFR is the mass in stars formed per unit time). This increase happened four times, and probably more times earlier than 3 Gyr ago, because the age measurements become very uncertain for stars older than about a Gyr. Therefore, the complete exclusion of orbital solutions with the condition COND also tells us that orbital solutions with four close encounters in the past 3 Gyr are excluded even more strongly: with dark matter halos, the LMC and SMC would merge into one galaxy after only 1-2 encounters and could not exist as two separate galaxies that are still orbiting around each other today!

The publication by Oehm & Kroupa (2024) also documents that the MW/LMC/SMC triple system of galaxies works perfectly fine if the Chandrasekhar dynamical friction is switched off, but only if the gravitational forces between the galaxies remain as large as they are with dark matter halos. This means that we need a theory of gravitation that has no dark matter particles (in order for there to be no Chandrasekhar dynamical friction) but which guarantees the large gravitational forces of attraction needed to keep galaxies bound in general and specifically the LMC and SMC (and all the other satellite galaxies of the MW) orbiting around the MW. This can only be done with a theory of gravitation which is effectively exactly like MOND, i.e. which keeps a stronger gravitational pull between the galaxies without dynamical friction. Note that other systems of galaxies on which the Chandrasekhar dynamical friction test has been applied yield the same conclusion (see DMC91.)

The application of the Chandrasekhar dynamical friction test thus falsifies the existence of dark matter halos around galaxies with much more than 5 sigma confidence: dark matter does not exist. MOND solutions to the orbits of the LMC and SMC however are obtained easily.

Second test: the gravitational potentials around galaxies are Milgromian out to large distances

If dark matter halos do not exist (as shown above), gravity in galactic outskirts should be Newtonian. Instead, galaxies exhibit flat rotation curves out to where gas can still be measured, which is typically two dozen kpc from the centre of a major disk galaxy like the MW. A way to describe these systems accurately is therefore to modify gravity at small gravitational gradients, and to strengthen it as described by Milgromian gravitation, i.e. MOND. As already shown here (Kroupa et al. 2022, their Sec.3.1), MOND is equivalent to obtaining the gravitational potential from the observed matter distribution by solving the generalised Poisson equation based on the p-Laplacian, where p=2 gives the Newtonian Poisson equation while p=3 gives us the Milgromian Poisson equation. Solving for the gravitational potential in the case of p=3 gives logarithmic potentials (rather than the Keplerian fall-off for p=2). That is, the circular velocity, Vc(R), around an isolated galaxy will remain constant to indefinite distances, R, for isolated disk and elliptical galaxies in the MOND case (yes: the rotation curves, i.e. Vc(R), of galaxies are flat in MOND, Vc does not decrease with distance, R, from the centre of the galaxy!).
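As a back-of-the-envelope illustration (not a calculation from any of the papers discussed), the deep-MOND limit for an isolated baryonic mass M gives Vc⁴ = G M a0, so Vc is the same at every radius:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
a0 = 1.2e-10       # Milgrom's constant, m/s^2
M_sun = 1.989e30   # solar mass, kg

# Deep-MOND limit for an isolated mass M: Vc**4 = G * M * a0, so the circular
# velocity is the same at every radius -- a logarithmic potential and an
# indefinitely flat rotation curve.
def vc_deep_mond(M_kg):
    return (G * M_kg * a0) ** 0.25   # m/s

# Illustrative Milky-Way-like baryonic mass of ~6e10 solar masses
vc_kms = vc_deep_mond(6e10 * M_sun) / 1000.0
print(round(vc_kms))  # of order the observed flat rotation speed of such a galaxy
```

The same relation is the baryonic Tully-Fisher relation: the asymptotic Vc depends only on the baryonic mass.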

In the publication “Indefinitely Flat Circular Velocities and the Baryonic Tully–Fisher Relation from Weak Lensing” (Mistele et al. 2024) the authors used the weak-lensing method to constrain the true gravitational potentials around isolated late-type (disk) and isolated early-type (elliptical) galaxies. All galaxies show the logarithmic potential predicted by MOND out to a Mpc! The (falsified – see above) dark matter theory predicts a decline of the rotation curve. This decline is totally ruled out by these new data. See Figure 2, which is taken from the paper by Mistele et al. (2024).

Galaxies are thus now known to generate a gravitational potential around themselves which is Milgromian and not Newtonian. Newtonian dynamics is invalid not only in strong gravitational potentials, as described by Einsteinian gravity, but also in the weakest gravitational potentials, where MOND better captures the observed phenomenology.

Third test: the tidal tails of open star clusters falsify Newtonian gravitation and behave exactly as predicted in Milgromian gravitation

If the above two results are correct, then Newtonian gravitation is ruled out (though it remains an excellent approximation in the Solar system), and Milgromian dynamics should be the correct framework to use when wanting to understand how gravity works in those regions where its spatial rate of change is very small. This is entirely verified by the third test: six independently working teams have measured the tidal tails around four open star clusters.

Open star clusters are particularly suited for this test because they are on near-circular orbits around the Galaxy. In each open star cluster the stars orbit chaotically around each other and, by exchanging small amounts of energy through the many weak gravitational tugs, some of the stars successively gain energy until they become unbound from the cluster. They then drift around the cluster and find one of the two Lagrange points (L1 or L2) through which they irrevocably escape from the open cluster. In Newtonian gravitation and at distances from the centre of the Galaxy similar to the Sun’s, these two Lagrange points are symmetrically spaced relative to the open cluster, and it is pure chance which of the two a star escapes through. This is explicitly shown by Pflamm-Altenburg et al. (2023). The tidal tails that develop ahead of and behind an open star cluster as its own stars evaporate out of it (every open cluster dissolves into the Galaxy over time) are thus easy to calculate, and there must be the same number of stars in the leading and in the trailing tidal tails if Newtonian gravitation is correct. But the observational results of the six teams inform us that each of the four open star clusters has more stars ahead of it than behind it. These six teams did not (dare to?) note or even discuss the implications of their results.

In the publication “Open star clusters and their asymmetrical tidal tails” (Kroupa et al. 2024) these measurements are combined, and in every case the open cluster has more stars ahead of it than behind it. The Newtonian prediction is that there should be the same number of stars ahead as behind (to within statistical scatter) within a few to tens of pc of the open star cluster. But the data provide an 8 sigma falsification of this Newtonian symmetry. Instead, the observed asymmetry (more stars in front of than behind the cluster) is exactly as expected if MOND is correct. In Milgromian gravitation, the two Lagrange points L1 and L2 are not symmetrically placed relative to the open star cluster, and the inner (L1) Lagrange point gives each star a larger chance to escape than the outer/backwards (L2) one. Figure 3 (taken from Kroupa et al. 2024) shows the observational data: each of the four clusters is seen from the North Galactic Pole, projected onto the X,Y plane of the Galactic disk, with X pointing towards the Galactic centre and Y in the direction of Galactic rotation.
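The statistical logic can be sketched with made-up star counts (not the published values): under Newtonian symmetry, each escaping star independently picks L1 or L2 with probability 1/2, so the leading-tail count should be binomial.

```python
import math

def binom_tail(n, k):
    """P(X >= k) for X ~ Binomial(n, p=0.5): the chance of a tail asymmetry at
    least this large if escape through L1 and L2 were equally likely."""
    return sum(math.comb(n, i) for i in range(k, n + 1)) / 2.0**n

# Hypothetical counts: 70 stars in the leading tail, 40 in the trailing tail
n_lead, n_trail = 70, 40
p = binom_tail(n_lead + n_trail, n_lead)
print(p)  # a few times 1e-3: the 50/50 Newtonian expectation is disfavoured
```

Combining four clusters that all lean the same way drives the significance far higher, which is the essence of the quoted 8 sigma result.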

Escaping stars from open star clusters inform us with extreme confidence that the gravitational potential is not Newtonian. The data inform us also that Milgromian gravitation describes the observations correctly.

Summary

Each of the above tests individually rules out that Newtonian dynamics plus dark matter is relevant for this Universe. Each of these falsifications of the LCDM model holds at more than 5 sigma confidence. Note also the logical consistency of the tests: if dark matter does not exist, then gravitation must be essentially Milgromian to account for the stronger pull needed to keep galaxies together (first test). This stronger gravitation is indeed observed through the weak-lensing analysis, which tells us that the gravitational potentials around galaxies are indeed Milgromian (second test). The tidal tails of open star clusters verify this with extremely high confidence: it would be illogical for the tidal tails to be Newton-symmetric if the first and second tests tell us that Newtonian gravitation is incorrect while the data are consistent with Milgromian dynamics. The third test also tells us that Newtonian gravitation is replaced by Milgromian gravitation on pc scales and larger, while the previously applied wide-binary star test tells us this is already the case on the scale of a few thousand astronomical units (Hernandez et al. 2019, Chae 2023, Hernandez et al. 2024, Chae 2024, with a discussion of the results by Indranil Banik et al. 2024 provided by Hernandez et al. 2023).

The completely disastrous track-record of the LCDM model of cosmology in accounting for observational data prior to 2023 is well documented by Banik & Zhao (2022) and Kroupa, Gjergo et al. (2023). Contrast this to the claim by Peebles (2024) that “The standard LambdaCDM cosmology passes demanding tests that establish it as a good approximation to reality”. It seems there is a cognitive dissonance among some people working on cosmology.

That the so-falsified LCDM theory matches some other data, such as the cosmic microwave background (CMB), is irrelevant. The LCDM model makes neither physical nor mathematical sense without dark matter. It also does not matter who accepts this situation: those still believing that the LCDM model (or its variants with warm dark matter, self-interacting dark matter or fuzzy dark matter) is relevant will simply be wasting more of their own time and precious research money (paid by the taxpayer) on research that is quite irrelevant.

Taken together, the above three tests strongly challenge the dark matter paradigm, indicating it cannot capture the dynamics of the Universe. Continuing to support dark matter is not scientifically justified.


In The Dark Matter Crisis by Elena Asencio and Pavel Kroupa. A listing of contents of all contributions is available here.
