Sounds like, by remaining anonymous to his friends and family, Théo is living proof of his own neighbor hypothesis lol
The most powerful part of your landing here is that distraction is going to increase; it's not going away. Leaders who aren't self-aware and have no one to tell them what their blind spots are will be left out of the game of success. Today, success is a game of devotion, attention, and ACTION. LOVE the idea of engaging people (neighbors) to find out real data. Asking questions is a great start, but to learn from the answers we get, we need to be engaged in richer conversations. This election sucked the air out of rich conversations. My prayer is that we've learned from this election to reinvent how we engage with the abundance of information we have, to make more progress together.
I liked the article. But I’m amazed that I keep seeing people not understand the difference between the percentage on a betting market and the percentages in poll results. Polls measure the percentage of people who intend to vote for candidate X or Y; a prediction market's percentage indicates the chance of a candidate winning the election. The prediction markets being at 55% to 45% doesn’t mean that, according to them, Trump would get 55% of the vote; it means that, according to them, Trump had a 55% chance of winning.
Also, just because Trump won while Nate Silver had his chances at 49% and prediction markets had them at 55% tells you nothing about who was right or wrong based on one event alone.
Grover Cleveland biographers must be pissed that they have to update their books
Enjoyed your piece here, but the polls weren’t “wrong” and the prediction markets weren’t “right”.
When we say polls or predictions are "right" or "wrong," we need to distinguish between point estimates and probability distributions. Polls typically provide a point estimate with a margin of error, which describes a confidence interval. If the actual result falls within this interval, the poll was statistically valid, even if the central estimate wasn't exact. This was mostly the case this year! Too close to call really is too close to call.
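A minimal Python sketch of the margin-of-error arithmetic behind that confidence interval (assuming a simple random sample at 95% confidence; real polls use weighting and modeling that widen it):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll: n = 1,000 respondents, candidate at 50%.
moe = margin_of_error(0.50, 1000)
# moe is about 0.031, i.e. +/- 3.1 points -- a 47%-53% interval,
# which is why a 49-51 race genuinely is "too close to call".
```

Any actual result inside that interval is consistent with the poll, even when the point estimate misses.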
For probabilistic forecasts like Silver's 50% or prediction markets' 60%, we can't evaluate accuracy on a single event using simple right/wrong criteria. A 60% probability forecast that comes true isn't necessarily "more right" than a 50% forecast - both are expressing uncertainty rather than making definitive predictions.
However, we can assess relative calibration: If similar events occur repeatedly, a well-calibrated 60% probability should be correct about 60% of the time. This is why prediction markets' track record across multiple events is more meaningful than their accuracy on any single event.
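A short Python sketch of both points above (illustrative numbers only): the Brier score, a standard way to score a probabilistic forecast, on the single event that happened, and then calibration emerging only over many simulated events.

```python
import random

random.seed(42)

def brier(forecast: float, outcome: int) -> float:
    """Brier score for one event: (forecast - outcome)^2, lower is better."""
    return (forecast - outcome) ** 2

# On the single event that occurred (outcome = 1), the 60% forecast
# scores better than the 50% one -- but one event proves little.
score_60 = brier(0.60, 1)  # 0.16
score_50 = brier(0.50, 1)  # 0.25

# Calibration only shows up in the long run: if a well-calibrated
# forecaster says "60%" on 10,000 independent events, roughly 60%
# of those events should occur.
events = [random.random() < 0.60 for _ in range(10_000)]
hit_rate = sum(events) / len(events)  # lands near 0.60
```

This is why a track record across many forecasts, not a single election, is what separates good forecasters from lucky ones.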
Agreed on this, and considering that our prediction market election sample is n = 1, we definitely need more data to assess their accuracy. The main point I wanted to highlight was the French trader's ability to cut through the noise on the whole thing and find alpha that could be exploited to his benefit.
Excellent article!! We live in a noisy, polarizing, and divisive world, so your conclusions are spot on. But even with the neighbor effect, I do question, based on the numbers, how many would remain reluctant to share who they're voting for if their true answer would cost a relationship. There will always be some level of error in trying to predict the mind and heart of a human.
In my short stint working for the LA Times Poll during the 1960s, I kept hearing that “the poll does not predict the election results; it takes a snapshot of the public’s opinions.” After watching the interviewers at work, I doubt the accuracy of even that. Despite their training, they asked leading questions and made other rookie mistakes. Like a lot of failing newspapers, the LAT is a shadow of its former self. Sad; I think local newspapers provided a necessary function, different from what social media provides, even if their polls were biased.
Another great piece. As a Canadian with our own upcoming election this is very interesting.
I listened to a Dan McMurtrie podcast a while back, and he said something along the lines of "your ability to thrive now is dependent on your ability to ignore" (hopefully I'm not butchering or misconstruing it). Not your ability to filter. I think distillation and filtering only go so far, because before you know it, you're distilling and filtering all the time. The abundance of info is unfathomable at this point. I find it really difficult to "keep up".
Anyhow, very much looking forward to your book!
I've seen on X that a company named AtlasIntel actually had excellent results in both 2024 and 2020. It may be that different methodological approaches explain the difference in results.
Fascinating. I can’t even imagine how these things will evolve between now and the next presidential election.
Excellent.
whenever you need an executive assistant
Excellent points were made to help me understand why the polls were wrong. Thank you. (Minor quibble on point #2: Teddy Roosevelt ran, unsuccessfully, for a nonconsecutive presidential term.)
Fair point. I meant the only other time we'd had a president successfully win a nonconsecutive election.