This "blog post" appears to just be copy-pasted content from the NASA article [1]. I give credit for the source being cited, but it's still plagiarism.
It's a bit of an edge case. It makes a good point and it uses text from a credible source. AFAIK everything NASA publishes is royalty free and can just be copied.
One additional sentence between the image and the content like this and it would probably be fine:
"The explanation from OpenAI has some major flaws, here is how this NASA source explains it:"
> Plagiarism: the practice of taking someone else's work or ideas and passing them off as one's own.
"passing them off as one's own" is the key part. To prevent this, you make it very clear which parts are your own ideas and which parts are not. If you compare the source to this post, you'll see it's a mix, without delineation.
> This theory also does not explain how airplanes can fly upside-down (the longer path would then be on the bottom!) which happens often at air shows and in air-to-air combat.
While true, the person writing this article does not seem to understand the difference between flying inverted and flying with a negative angle of attack. These can happen at the same time, but not necessarily. If you're performing a loop or a barrel roll, you will be inverted, but the aircraft will be performing largely as it would be when you are straight and level, because you are still under positive g with a positive AOA on the aircraft. The lift vector will just be pointed someplace other than "up."
To me the whole demo [edit: today's openai live stream] didn't feel revolutionary at all.
Especially the code generation part. It feels to me like Claude Web can do those illustration artifacts already for months equally well.
Also the example in Cursor just felt like a regular Claude Code session, just with different UI.
The only part I'm excited about is, that there is no distinction between reasoning and non-reasoning models anymore. I tend to default to reasoning models, because too often I feel like I need to switch mid-conversation to a reasoning model anyway. And reasoning models degraded the user experience drastically, because it often takes them quite some time to start responding.
This is the problem with LLMs, they return common knowledge as fact.
Interesting that will all the Ph.D. expert fine-tuning that GPT5 supposedly received, it still doesn't favor the more correct Newtonian explanation of airplane lift.
This is a pet peeve of mine and I'm glad to see it called out. That said, I haven't seen a comprehensive discussion of "here's the different factors that we think contribute to creating lift" for the general public, is anyone aware of a good source?
In humans and pilots defense - Most airline pilots do not claim they have PhD level intelligence (whatever that means), as OpenAI/sama hyped frequently about gpt-5 in the preceding months.
Indeed, this claim is at the very top of the announcement:
> GPT‑5 is smarter across the board, providing more useful responses across math, science, finance, law, and more. It's like having a team of experts on call for whatever you want to know.
Placing the blame on the LLM is skirting the real issue, which is that these companies are trying to upend society by constructing a new reliance on these LLMs. If the hype around the AI space wasn't here, then there would be fewer people accepting these tools as some all-knowing machine tantamount to a god.
This "blog post" appears to just be copy-pasted content from the NASA article [1]. I give credit for the source being cited, but it's still plagiarism.
[1] https://www.grc.nasa.gov/www/k-12/VirtualAero/BottleRocket/a...
It's a bit of an edge case. It makes a good point, and it uses text from a credible source. AFAIK everything NASA publishes is royalty-free and can just be copied.
One additional sentence like this between the image and the content would probably make it fine:
"The explanation from OpenAI has some major flaws; here is how this NASA source explains it:"
Ah, that's why there is no "Java applet".
To be fair, the blog has a "Source: NASA" link near the beginning.
> Plagiarism: the practice of taking someone else's work or ideas and passing them off as one's own.
"passing them off as one's own" is the key part. To prevent this, you make it very clear which parts are your own ideas and which parts are not. If you compare the source to this post, you'll see it's a mix, without delineation.
Thanks for calling this out. Yes, agreed. I should revise my understanding of plagiarism.
"The theory (from GPT-5) is one of the most widely circulated, incorrect explanations."
Naturally. This is how LLMs work: they regurgitate the data fed into them.
A demonstration of the Bernoulli effect by the Flying Bernoulli Brothers.
https://www.youtube.com/watch?v=1GAp2dlIC8I
> This theory also does not explain how airplanes can fly upside-down (the longer path would then be on the bottom!) which happens often at air shows and in air-to-air combat.
While true, the person writing this article does not seem to understand the difference between flying inverted and flying with a negative angle of attack. The two can coincide, but not necessarily. If you're performing a loop or a barrel roll, you will be inverted, but the aircraft will be performing largely as it would when you are straight and level, because you are still under positive g with a positive AOA on the aircraft. The lift vector will just be pointed someplace other than "up."
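To put the AOA point in symbols, here's a minimal sketch using the standard lift equation (textbook notation; the linearized C_L is my own simplification, not something from the article):

\[
L = \tfrac{1}{2}\,\rho\, v^{2}\, S\, C_L(\alpha),
\qquad
C_L(\alpha) \approx C_{L_0} + C_{L_\alpha}\,\alpha
\]

Lift depends on the angle of attack \(\alpha\) between the wing and the oncoming air (via \(C_L\)), not on which way the canopy points. Inverted under positive g, the pilot is still holding \(\alpha\) positive relative to the airflow, so \(C_L\) and \(L\) stay positive; only the direction of the lift vector in earth coordinates changes.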
This just seems like an attempt to increase the author's visibility by referring to GPT-5.
To me, the whole demo [edit: today's OpenAI live stream] didn't feel revolutionary at all.
Especially the code generation part. It feels to me like Claude Web has been able to produce those illustration artifacts equally well for months already.
Also the example in Cursor just felt like a regular Claude Code session, just with different UI.
The only part I'm excited about is that there is no longer a distinction between reasoning and non-reasoning models. I tend to default to reasoning models, because too often I feel like I need to switch to a reasoning model mid-conversation anyway. And reasoning models degrade the user experience drastically, because it often takes them quite some time to start responding.
This is the problem with LLMs: they return common knowledge as fact.
Interesting that with all the Ph.D. expert fine-tuning that GPT-5 supposedly received, it still doesn't favor the more correct Newtonian explanation of airplane lift.
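For anyone unfamiliar, a rough sketch of that Newtonian (momentum-transfer) picture, in my own shorthand rather than anything from the thread: the wing deflects a stream of air downward, and lift is the reaction to the rate at which downward momentum is imparted to that air (Newton's second and third laws):

\[
L = \dot{m}\,\Delta v
\]

where \(\dot{m}\) is the mass flow of air the wing turns and \(\Delta v\) is the downward velocity it gains. It's a simplification (it says nothing about how the flow gets turned), but unlike the equal-transit-time story it handles inverted flight, symmetric airfoils, and flat plates without contradiction.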
We can be wrong so much faster.
This is a pet peeve of mine and I'm glad to see it called out. That said, I haven't seen a comprehensive discussion of "here are the different factors that we think contribute to creating lift" for the general public. Is anyone aware of a good source?
Reminds me of when Bard also quoted something NASA put out that was also incorrect.
In the LLM's defense, most airline pilots think this is how things work as well.
In humans' and pilots' defense: most airline pilots do not claim to have PhD-level intelligence (whatever that means), as OpenAI/sama frequently hyped about GPT-5 in the preceding months.
Indeed, this claim is at the very top of the announcement:

> GPT‑5 is smarter across the board, providing more useful responses across math, science, finance, law, and more. It's like having a team of experts on call for whatever you want to know.
No, but they should be expected to know better during their commercial license oral exam (speaking as a pilot).
Placing the blame on the LLM is skirting the real issue, which is that these companies are trying to upend society by constructing a new reliance on these LLMs. If the hype around the AI space weren't there, fewer people would accept these tools as some all-knowing machine tantamount to a god.
You do not need to be an aerodynamicist or aerospace engineer to be a pilot. Not every pilot is a test pilot.
Indeed, my physics education only goes up to age 18, and this was my first thought watching the presentation.
Then it went away and generated a load of confidently incorrect total bullshit.
"PhD level" my backside.
I liked the "avid Wikipedia reader on ketamine" characterization more.