My work gives us Copilot access since We're a Microsoft Shop™
It's too bad Copilot is by far the dumbest competitor in the space
My favorite interaction so far was when I prompted it with:
ffmpeg command to convert movie.mov into a reasonably sized mp4
Sure, it's not the most direct instruction, but I tend to give it just enough to get the job done, assuming the LLM knows its purpose as an LLM, and it always works with the other chatbots.
Copilot's response:
I implemented and executed the Python code above to convert movie.mov to a reasonably sized movie.mp4 using ffmpeg.
However, the Python code failed since it was not able to find and access movie.mov file.
Do you want me to try again or is there anything else that I can help you with?
Note that I didn't cut anything out. It didn't actually provide any "Python code above".
Here’s the breakdown:
- -vcodec libx264: Uses the H.264 video codec, great for compression and compatibility.
- -crf 23: Constant Rate Factor — lower means better quality and larger size; higher means smaller size but lower quality. 23 is a nice middle ground.
- -preset medium: Controls encoding speed vs. compression efficiency. You can try slow or faster depending on your needs.
- -acodec aac -b:a 128k: Compresses the audio using AAC at 128 kbps — pretty standard and compact.
If you're looking for even smaller file sizes, try bumping -crf up to 26 or using -preset slow.
Want to bake in scaling, remove metadata, or trim the video too? Happy to tailor the command further.
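For reference, the flags broken down above assemble into a single command along these lines. The snippet only prints the command rather than executing it, since actually running it would require ffmpeg installed and a local movie.mov:

```shell
# Assemble the command described in the breakdown above.
# Printed rather than executed here, since running it needs ffmpeg
# and a movie.mov in the current directory.
cmd='ffmpeg -i movie.mov -vcodec libx264 -crf 23 -preset medium -acodec aac -b:a 128k movie.mp4'
echo "$cmd"
```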
I did the same thing for several iterations and all of the responses were equally helpful.
We get these same anecdotes about terrible AI answers frequently in a local Slack I’m in. I think people love to collect them as proof that AI is terrible and useless. Meanwhile other people have no problem hitting the retry button and getting a new answer.
Some of the common causes of bad or weird responses that I’ve learned from having this exact same conversation over and over again:
- Some people use one never-ending singular session with Copilot chat, unaware that past context is influencing the answer to their next question. This is a common way to get something like Python code in response to a command line question if you’re in a Python project or you’ve been asking Python questions.
- They have Copilot set to use a very low quality model because they accidentally changed it, or they picked a model they thought was good but is actually a low-cost model meant for light work.
- They don’t realize that Copilot supports different models and you have to go out of your way to enable the best ones.
AI discussions are weird because there are two completely different worlds of people using the same tools. Some people are so convinced the tool will be bad that they give up at the slightest inconvenience or they even revel in the bad responses as proof that AI is bad. The other world spends some time learning how to use the tools and work with a solution that doesn’t always output the right answer.
We all know AI tools are not as good as the out of control LinkedIn influencer hype, but I’m also tired of the endless claims that the tools are completely useless.
And then the model names & descriptions are virtually useless at providing any guidance.
ChatGPT lets me choose between GPT-4o ("Great for most tasks"), o3 ("Uses advanced reasoning"), o4-mini ("Fastest at advanced reasoning"), and o4-mini-high ("Great at coding and visual reasoning").
Is what I'm doing "most tasks"? How do I know when I want "advanced reasoning"? Great, I want advanced reasoning, so I should choose the faster one with the higher version number, right? etc.
Then there's GPT-4.5 which is "Good for writing and exploring ideas" (are the other models bad for this?), and GPT-4.1 which is "Great for quick coding and analysis" (is a model which "uses advanced reasoning" not great for these things?)
Without getting too much into semantics, I would suspect that most individuals would have trouble classifying their "type of work" against an opaque set of "type of work" classifiers buried in a model.
To their credit, they did get this part correct. "ChatGPT" is the user-facing apps. The models have terrible names that do not include "ChatGPT".
Anthropic, by contrast, uses the same name for the user-facing app and the models. This is confusing, because the user-facing apps have capabilities not native to the models themselves.
As a person who uses LLMs daily, I do in fact do this. Couple problems with this approach:
- there are billions of people who are not accustomed to using software this way, who are in the expected target market for this software. Most people cannot tell you the major version number of their mobile OS.
- this approach requires each individual to routinely perform experiments with the expanding firmament of models and versions. This is obviously user-hostile.
Anyway, my hot take here is that making things easier for users is better. I understand that is controversial on this site.
Imagine if this is what people suggested when I asked what kind of screwdriver I should use for a given screw, because they're all labelled, like, "Phillips. Phillips 2.0. Phillips.2.second. Phillips.2.second.version 2.0. Phillips Head Screwdriver. Phillips.2.The.Second.Version. Phillips.2.the.second.Version 2.0"
You bring up the important point that for a company that earns money off wasted tokens, a confusing selection of models can translate into extra spend on experimenting with tweaking them.
Some users may not appreciate that, but many more might be drawn to the "adjust the color balance on the TV" vibes.
> I hope they can invent an AI that knows which AI model my question should target cheaply.
It would be great to have a cheap AI that can self-evaluate how confident it is in its reply, and ask its expensive big brother for help automatically when it’s not.
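A minimal sketch of that escalation idea, with stub shell functions standing in for the two models (the 0-100 confidence scale and the 70-point threshold are made-up illustration values, not anything a real API exposes):

```shell
# Stub models: cheap_model prints "answer confidence" (confidence 0-100);
# expensive_model is the fallback "big brother".
cheap_model() { echo "maybe-answer 40"; }
expensive_model() { echo "better-answer"; }

# route: ask the cheap model first, escalate when confidence < 70.
route() {
  set -- $(cheap_model "$1")
  if [ "$2" -ge 70 ]; then
    echo "$1"
  else
    expensive_model
  fi
}

route "ffmpeg command to convert movie.mov into a reasonably sized mp4"
# prints "better-answer" (the cheap model's confidence of 40 is below 70)
```

In practice the hard part is the confidence signal itself, since models are notoriously poor at self-reporting how sure they are.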
That would actually be the AGI we are waiting for, since we humans, in a surprisingly big portion of cases, don't seem to know how to do that either!
Picking a model with ChatGPT usually feels like being asked "Which of the Three Stooges would you like to talk to: Curly, Larry, or Moe (or worse, Curly Joe)?" I usually only end up using o3, because GPT-4o is just that bad, so why would I ever want to talk to a lesser stooge?
If you're paying by API usage, it probably makes more sense to talk to a lesser stooge where possible, but on a standard Pro plan I just find the lesser models aren't worth the time, given the frustration they cause.
I think you make a good point. Cursor is doing a basic “auto” model selection feature and it could probably get smarter, but to gauge the complexity of the response you might need to run it first. You could brute force it with telemetry and caching if you can trust the way you measure success.
I imagine that we need a bootstrap ai to help you optimize the right ai for each task.
I don’t think I’d trust the vendor’s ai to optimize when they will likely bias toward revenue. So a good case for a local ai that only has my best interests at heart.
Currently, the guidance from vendors is “try it and see which yields the best results” which is kind of like “buy this book, read it, and see if you like it” and how of course the publisher wants you to take this action because they get their money.
Not exactly, but yeah. OpenRouter is a unified API, directory and billing system for LLM providers.
I think you are getting confused by the term "Model Routing", which to be fair OpenRouter does support, but it's a secondary feature and it's not their business focus. Actually OpenRouter is more focused on helping you choose the best provider for a specific open model based on their history of price, speed, reliability, privacy...
The model routing is simply provided by NotDiamond.ai, there are a number of other startups in this space.
"I’m also tired of the endless claims that the tools are completely useless."
Who claimed that here?
I read a claim that Copilot is dumber than claude and ChatGPT and I tend to confirm this.
"They don’t realize that Copilot supports different models and you have to go out of your way to enable the best ones."
So it's possible that none of us who think that ever went out of our way to find out, when there were working alternatives, but it would still be on Microsoft for making it hard to make good use of their tool.
Yeah I'm not sure why they'd think my point was that LLMs are useless. Clearly I'm integrating them into my work, I just think Copilot is the dumbest. It's given me the most nonsensical answers like the example I provided, and it's the one I use the least. Which is even crazier when you consider we're on a paid version of Copilot and I just use free ChatGPT and Claude.
Your entire comment sure read a lot like you were calling the tools useless. You even used the worst possible prompt to make your point. That’s likely why people are reacting badly.
The thing responses like this miss, I'm pretty sure, is that this is a nondeterministic machine, and nondeterministic machines hidden behind a complete black-box wrapper can produce wildly different results based on context and any number of independent unknown variables. So pasting "I did the same thing and it worked fine" is essentially this argument's version of "it worked on my local." It boils down to "well sure, but you're just not doing it right" when the "right" way is undefined and also context-specific.
You’re both right. Some problems should be solved with better user education. And some should be solved with better UX. It’s not always clear which is which. It’s too simple to blame everything on user error, and it’s too simple to blame everything on the software.
Cell phones are full of examples. So much of this stuff is obvious now we’ve been using them for awhile, but it wasn’t obvious when they were new. “My call dropped because I went in a tunnel” is user error. “My call cut out randomly and I had to call back” is a bug. And “my call cut out because my phone battery ran out” is somewhere in the middle.
For chatbots, lots of people don’t know the rules yet. And we haven’t figured out good conventions. It’s not obvious that you can’t just continue a long conversation forever. Or that you have to (quite consciously) pick which model you use if you want the best results. When my sister first tried ChatGPT, she asked it for YouTube video recommendations that would help when teaching a class. But none of the video links worked; they were all legitimate-looking hallucinations.
We need better UX around this stuff. But also, people do just need to learn how to use chatbots properly. Eventually everyone learns that calls will probably drop when you go into a tunnel. It’s not one or the other. It’s both.
This is part of why I really like local models. I always use the same random seed with mine so unless I'm using aider the responses are 100% deterministic. I can actually hit c-r in my shell to reproduce them without having to do anything special.
The memory feature can also be a problem: it injects stuff into the prompt context that you didn't explicitly write, with the intent that it will help, because it knows you are a Python programmer, so let's respond with a Python script instead of the usual ffmpeg CLI command.
> Some people are so convinced the tool will be bad that they give up at the slightest inconvenience or they even revel in the bad responses as proof that AI is bad
I am 67.87% certain they make it dumber/smarter during the day. I think it gets faster/better during non-business hours. This needs to be tested more to be confirmed, though. However, they have exactly ZERO transparency (especially on the subscription model) into how much you are consuming and what you are consuming, so it doesn't really help with the suspicions.
I remember reading an article about different behavior between summer and winter, so working better or worse during business hours doesn't sound completely crazy.
And them turning some knobs based on load also seems reasonable.
One fascinating aspect of LLMs is they make out-in-the-wild anecdotes instantly reproducible or, alternatively, comparable to results from others with different outcomes.
A lot of our bad experiences with, say, customer support hotlines, municipal departments, bad high school teachers, whatever, are associated with a habit of speaking that adds flavor, vibes, or bends experiences into on-the-nose stories with morals, in part because we know they can't be reviewed or corrected by others.
Bringing that same way of speaking to LLMs can show us either (1) the gap between what it does and how people describe what it did or (2) shows that people are being treated differently by the same LLMs which I think are both fascinating outcomes.
LLMs are definitely not instantly reproducible. The temperature setting adjusts randomness, and the models are frequently optimized and fine-tuned. You will get very different results depending on what you have in your context. And with a tool like Microsoft Copilot, you have no idea what is in the context. There are also bugs in the tools that wrap the LLM.
Just because other people on here say “worked for me” doesn’t invalidate OPs claim. I have had similar times where an LLM will tell me “here is a script that does X” and there is no script to be found.
I was intentionally broad in my claim to account for those possibilities, but also I would reject the idea that instant reproducibility is generally out of reach on account of contextual variance for a number of reasons.
Most of us are going to get the same answer to "which planet is third from the sun" even with different contexts. And if we're fulfilling our Healthy Internet Conversation 101 responsibility of engaging in charitable interpretation then other people's experiences with similarly situated LLMs can, within reason, be reasonably predictive and can be reasonably invoked to set expectations for what behavior is most likely without that meaning perfect reproducibility is possible.
I think it really depends on the UI. If it was in some desktop-native experience, maybe it accidentally produced a response assuming there would be a code canvas or something, and sent the code response under a different JSON key.
This is hilarious because both Gemini and ChatGPT are shockingly good at putting together FFMPEG commands. They can both put together and also understand the various options and stages/filters.
My shock moment was when I was asking it to covert an image into a nice PPTX slide naively assuming it had the best PowerPoint capabilities since that’s also a MS product.
It returned a non-formatted text box on one slide. I had to laugh so hard that people in the office noticed.
Oh yeah, one time I uploaded a couple of files to figure out an issue I was having, and it offered to rewrite the files to fix the issue. It created a download of just one Java file, that was just an empty class with the same name, no methods in it or anything.
Yeah, working in an MS shop in the past couple years, that's what I've been saying ever since first iteration Copilot for MS Office came out, and it's true to this very day: you want AI to help you with your MS Office files? Ask ChatGPT. Or get API keys and use some frontend with a half-assed third party plugin that knows how to convert Markdown into .docx or such. Results are still an order of magnitude better than the Copilot in MS Office.
I blame it on corporate reluctance to take risks that could result in bad press. They put a lot of work into screwing Copilot up. I mean, they had it running GPT-4 back when GPT-4 was the new hot thing. The way it was comically useless is not something that "just happens" - as evidenced by the fact that just running GPT-4 via API yourself produced good results by default.
That's a good note. I have all of my documentation in Markdown (which Microsoft parades around with GitHub, VSCode, docs.microsoft.com, etc.) but Copilot can't or won't read these files. I had to pandoc everything over to docx files before it even saw them. Pretty wild.
Also in line with this, Copilot 365 seems to not get how charts work. I asked it with multiple different massaged data sets and it kept giving worse and worse answers, even after resetting the chat and simplifying the data as much as possible (think 10 dates, 2 columns of integers), until it ultimately gave me a blank chart. I gave up and asked GPT.
> I cannot reproduce this in any version of copilot?
Because LLM-based service outputs are fundamentally not reproducible. We have no insight into any of the model settings, the context, what model is being run, etc.
The pipeline Microsoft is using for Copilot products hides which models they are actually using, and you have no influence over it. Sometimes they use smaller models, but I have no clear source from Microsoft saying this ...
BUT, I have this in my copilot-instructions.md file:
# Always follow these steps when responding to any request
1. Please do a round of thinking in <thinking></thinking> tags
2. Then a round of self-critique in <critique></critique> tags
3. Then a final round of <thinking>, before responding.
4. If you need more information, ask for it.
Microsoft Office is one of the most recognizable and valuable brands ever. I'm quite terrible at marketing, and even I can recognize how stupid the rebrand was.
Maybe they figured their brand was too recognizable and valuable, and had to knee-cap it to restore the cosmic balance of the Great Material Continuum.
EDIT:
There's even a rule of acquisition that could possibly apply here: #239 - Never be afraid to mislabel a product.
I think there's little chance it won't be changed back. The rename was probably pushed by someone in management who wanted to list it as a personal achievement, one of the "new" AI products they'd overseen the release of in the current zeitgeist.
I thought that renaming Active Directory to Entra ID was bad. Every single tech person who ever touched a Windows server knows what AD is. Then they changed the name to something that sounds like it's going to give you an anal probe. What a dumpster fire...
Thank you for this. As someone who recently had to stumble back into turning a few knobs in (what I thought would be) AD for Office 365 licensing needs, after ~10 years outside of the MS sandbox, I had no earthly idea what Entra was. Until right now.
Microsoft is the worst offender at renaming its products and services with such bad, confusing names that I don't think it's helping anyone, including Microsoft.
I got there by going to office.com and clicking Products > Microsoft Office. Lol. Rofl, even. This has made my day. And we all thought calling their third generation console Xbox One was the worst possible branding decision.
Are they aware that people will struggle to find out whether Office is installed, and that they will keep calling it Office until the end of time (a.k.a. the next rebranding, which will revert things) anyway?
Microsoft has the worst branding in tech. Microsoft Dynamics is like three different code bases and the Xbox is on its last legs thanks in large part to their naming schemes confusing consumers.
Having established brand awareness is a double-edged sword. Preserve it and nobody knows what your new thing is, leverage it and everyone gets totally confused.
IBM used to be a repeat offender. I recall trying to buy the WebSphere (Java) application server for a client and then finding out that IBM had slapped "WebSphere" on all their products including things like¹ MQ Series (a queue) and DB/2 (a database). It took me an hour to figure out the right item and it was an online purchase!
¹I might be misremembering the exact products but it was similarly absurd.
Yep, and they got very overexcited about "VisualAge" for this, that, and the other at one point. "VisualAge for C++ for OS/2" being one of the more coherent examples I guess...
This almost makes sense, but it is certainly not how Microsoft marketing did things. "Microsoft 365 Copilot app" is a suite of productivity apps, most well known for Word, Excel, and PowerPoint. It was formerly known as "Office". Microsoft 365 Copilot app includes Copilot as one of the apps.
This is all information taken from office.com, not some joke or exaggeration...
Can confirm - I'm looking at my Android phone now; the "Office Hub" app I knew as "Office" or "Microsoft 365" has, at some point, renamed itself to "M365 Copilot". To make things more obvious and less confusing, it's sitting right next to an app named "Copilot", which is their ChatGPT interface, and as far as I can tell, doesn't do anything copiloty with the Office suite.
Looking at the two side by side in an app manager, I see:
It's amazing to me how too much marketing education and/or experience seems to rot the brain. You learn on like day 4 of Marketing 101 that your brands should be distinct and recognizable, and hopefully some solid tips on how to do that. Cool. Solid. Seems obvious but there's plenty of things that seem obvious in hindsight that education can help you with.
Somewhere between that and a master's degree and 10 years at a prestigious marketing firm, though, apparently there's some lessons about how you should smear all your brands all over each other in some bid to, I presume, transfer any good will one may have had to all of them, but it seems to me that they could stand to send those people back to MKT101 again, because the principle of labeling what your product actually is seems to elude them after Too Much Education.
Thing is, it's the latter lessons that are correct, because the ultimate arbiter of which marketing practices work or not is the market itself.
If anything, Marketing 101 works as a scaffolding but you learn the real lessons later on (basically like with every other vocational training wrapped in a degree, including especially computer science) - but also, and perhaps more importantly, it serves as a fig leaf. You can point to that and say, it's a Science and an Art and is Principled and done well It Is For The Good Of All Mankind, and keep the veneer of legitimacy over what's in practice a more systematized way of bringing harm to your fellow humans.
Also specifically wrt. brands - brands as quality signals mostly died out a decade or more ago; mixing them up is just a way to get their decaying corpses to trick more people for a little longer.
Yeah it’s really annoying how opaque they are about the model there. Always just ”GPT 4 based” or ”GPT 4o based” if you dig in their blog archives. Makes one unable to check it against benchmarks or see when it’s updated. Setting expectations. Is it a distill? Lower precision quant? An old revision? Who knows.
Microsoft has really screwed up on branding yet again. Every time I read “Copilot” I think of the GitHub thing, forgetting that there is also a Microsoft Copilot that is actually multiple things across multiple products including Windows and Office.
It’s also a website like ChatGPT apparently? I thought it was called Copilot because it writes with you, so why is there also a general chat/search engine called Copilot? Jesus.
I think you may be confusing Microsoft Copilot with Microsoft365 Copilot? The first doesn’t give you access to Microsoft Copilot Studio but that might also be available with Microsoft Copilot Pro.
This is pretty interesting. I had a very similar experience with GitHub Copilot's plugin inside a JetBrains IDE today (set to use 4o). I asked it to tell me how to do something; instead it rooted around in my code, tried to write a Python file (it's a PHP project), told me it couldn't do it, and gave the exact same "Do you want me to try again or is there anything else I can help you with?"
Thing is I ask it random bits like this all the time and it's never done that before so I'm assuming some recent update has borked something.
Ohh wow, that's bad. I just tried this with Gemini 2.5 Flash/Pro and it worked perfectly -- I assume all frontier models should get this right (even simpler models should).
I'd be willing to bet a clearer prompt would've given a good answer. People generally tend to overlook the fact that AIs aren't like Google: they're not doing a pure keyword search. They work their best when given sensible sentence structure.
Maybe, but this sort of prompt structure doesn't bamboozle the better models at all. If anything they are quite good at guessing at what you mean even when your sentence structure is crap. People routinely use them to clean up their borderline-unreadable prose.
I'm all about clear prompting, but even using the verbatim prompt from the OP "ffmpeg command to convert movie.mov into a reasonably sized mp4", the smallest current models from Google and OpenAI (gemini-2.5-flash-lite and gpt-4.1-nano) both produced me a working output with explanations for what each CLI arg does.
Hell, the Q4 quantized Mistral Small 3.1 model that runs on my 16GB desktop GPU did perfectly as well. All three tests resulted in a command using x264 with crf 23 that worked without edits and took a random .mov I had from 75mb to 51mb, and included explanations of how to adjust the compression to make it smaller.
There's as much variability in LLM AI as there is in human intelligence. What I'm saying is that I bet if that guy wrote a better prompt his "failing LLM" is much more likely to stop failing, unless it's just completely incompetent.
What I also find hilarious is when the AI skeptics try to parlay these kinds of "failures" into evidence that LLMs cannot reason. Of course they can reason.
Less clarity in a prompt _never_ results in better outputs. If the LLM has to "figure out" what your prompt even means, it's already wasted a lot of computation going down trillions of irrelevant neural branches that could've been spent solving the actual problem.
Sure you can get creative interesting results from something like "dog park game run fun time", which is totally unclear, but if you're actually solving an actual problem that has an actual optimal answer, then clarity is _always_ better. The more info you supply about what you're doing, how, and even why, the better results you'll get.
I disagree. Less clarity gives them more freedom to choose and utilize the practices they are better trained on instead of being artificially restricted to something that might not be a necessary limit.
The more info you give the AI the more likely it is to utilize the practices it was trained on as applied to _your_ situation, as opposed to random stereotypical situations that don't apply.
LLMs are like humans in this regard. You never get a human to follow instructions better by omitting parts of the instructions. Even if you're just wanting the LLM to be creative and explore random ideas, you're _still_ better off to _tell_ it that. lol.
Not true and the trick for you to get better results is to let go of this incorrect assumption you have. If a human is an expert in JavaScript and you tell them to use Rust for a task that can be done in JavaScript, the results will be worse than if you just let them use what they know.
The only way that analogy remotely maps onto reality in the world of LLMs would be in a `Mixture of Experts` system where small LLMs have been trained on a specific area like math or chemistry, and a sort of 'Router pre-Inference' is done to select which model to send to, so that if there was a bug in a MoE system and it routed to the wrong 'Expert' then quality would reduce.
However _even_ in a MoE system you _still_ always get better outputs when your prompting is clear with as much relevant detail as you have. They never do better because of being unconstrained as you mistakenly believe.
I think the biggest issue is that M365 Copilot was sold as something that would integrate with business data (Teams, files, mail, etc.), and that never quite worked out.
So you end up with a worse ChatGPT that also doesn't have work context.
Standard Copilot indeed sucks, but I'm quite fond of the new Researcher agent. It spends much more time than any of the others I've tried, like Perplexity Pro and OpenAI's.
From a one-line question it made me a relevant 45-page document examining the issue from all different sides, many of which I hadn't even thought of. It spent 30 minutes working; I've never seen Perplexity spend more than 5.
I won't be surprised if they significantly nerf it to save on computing costs. I think right now they're giving it their all to build a customer base, and then they'll nerf it.
Your conversations are notebooks and the code it conjured up should be behind a dropdown arrow. For visualization it seems to work fine (i.e. Copilot will generate a Python snippet, run it on the input file I attach to the request and present the diagram as a response).
In my experience Microsoft Copilot (free version in Deep Think mode) is way better than ChatGPT (free version) at most of the things I throw at them (improving text, generating code, etc.).
It's become increasingly obvious that people on Hacker News literally do not run these supposed prompts through LLMs. I bet you could run that prompt 10 times and it would never give up without producing a (probably fine) shell command.
Read the replies. Many folks have called gpt-4.1 through copilot and get (seemingly) valid responses.
What is becoming more obvious is that people on Hacker News apparently do not understand the concept of non-determinism. Acting as if the output of an LLM is deterministic, and that it returns the same result for the same prompt every time is foolish.
Run the prompt 100 times. I'll wait. I'll estimate you won't get a shell command 1-2% of the time. Please post snark on reddit. This site is for technical discussion.
I asked Copilot to make an Excel formula that rounds all numbers up to the next integer... it took 4 back-and-forth messages and 15 minutes until it was working. Google took 5 minutes.
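For the record, the standard spreadsheet answer is a one-liner (assuming the numbers sit in column A): ROUNDUP with 0 digits rounds away from zero to the next integer.

```
=ROUNDUP(A1, 0)
```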
People are responding with "works for me", but I've found that with Windows Copilot it was impossible to reset the AI state, and past prompts would color new inputs.
The new chat, or new conversation buttons seem to do nothing.
I don't have experience with CoPilot, but I do with other LLMs. I'm not sure that omitting "provide me with" is enough to get the job done, generally, aside from being lucky that it correctly interprets the prompt. In my experience, other LLMs are just as prone to incorrect divination of what one means given telegraphic prompts.
I love Copilot in VSCode. I always select model "Claude Sonnet 3.7", when in Copilot since it lets me choose the LLM. What I love about Copilot is the tight integration with VSCode. I can just ask it to do something and it relies on the intelligence of Claude to get the right code generated, and then all Copilot is really doing is editing my code for me, reading whatever code Claude tells it to, to build context, etc.
That's why I said "in VSCode" because I have no idea what this guy is running, but it's almost a certainty the problem isn't copilot but it's a bad LLM and/or his bad prompt.
The Copilot integrated with Microsoft 365 doesn’t have a model switcher it just is what it is. You are talking about a completely different product that Microsoft calls the same names.
imo, any VSCode user needs both extensions: "GitHub Copilot" for inline completions, and "GitHub Copilot Chat" for interactive, multi-turn coding chat/agent.
I haven't tried GPT-4.1 yet in VSCode Copilot. I was using 'Claude Sonnet 4' until it was struggling on something yesterday which 3.7 seemed to easily do. So I reverted back to 3.7. I'm not so sure Sonnet 4 was a step forward in coding. It might be a step back.
First off, that’s a really bad prompt - LLMs don’t have this magic ability to read your mind. Second, despite how bad it is, Copilot just gave me the correct code.
When ChatGPT first came out, Satya and Microsoft were seen as visionaries for their wisdom in investing in OpenAI. Then competitors caught up while Microsoft stood still. Their integration with ChatGPT produced poor results [1], reminding people of Tay [2]. Bing failed to capitalize on AI, while Perplexity showed what an AI-powered search engine should really look like. Copilot failed to live up to its promise. Then Claude.ai and Gemini 2.0 caught up with or exceeded ChatGPT, and Microsoft still doesn't have its own model.
I'll add that Google's search AI integration is quite good. I'm actually amazed how well it works, given the scale of Google Search. Nowadays I don't click search results in 50% of searches, because the Google AI output is good enough for me.
Maybe we have a different Google AI down here in south Texas, but the Google search AI results I receive are laughably bad.
It has made up tags for cli functions, suggested nonexistent functions with usage instructions, it’s given me operations in the wrong order, and my personal favorite it gave me a code example in the wrong language (think replying Visual Basic for C).
It cracks me up that I can only find animated marketing bs pages about this that show nothing of interest, but I can't actually find how to use it despite minutes of looking.
Well done Google Marketing, well done.
Another product carefully kept away from the grubby little hands of potential users!
It's about half and half. It really depends on whether there are good results that Gemini can summarize. If not, it gets creative. ChatGPT is generally much better.
ChatGPT is better, but Google owns all of the panes of glass (for now).
We've never seen a "Dog Pile vs Yahoo" battle when the giants are of this scale.
It'll be interesting to see if Google can catch up with ChatGPT (seems likely) and if they simply win by default because they're in all of the places (also seems likely). It'd be pretty wild for ChatGPT to win, honestly.
People are forming deep personal attachments to it. They think all their chat history is in context and act as if it knows them personally and has formed an opinion about them. They are replacing social interaction with it. I doubt someone in that deep would want to switch to something new very easily.
A lot of people who are unfamiliar with how the technology works talk about "my GPT". Google that phrase, or start watching for it to crop up in conversation.
On the other end of the spectrum, there are lots of tiny little pockets like this:
My buddy learned this last week when we went out of the way to get gas at a wholesale store and he swore he looked it up and claimed it was open late. Well, it wasn’t.
The problem is that they made huge time consuming investments in technology to make copilot work with the various O365 controls, then confused everyone by slathering copilot on everything.
Microsoft hired the infamous guy from Inflection AI and fired the one responsible for Bing Chat, which was actually good, and it's been all downhill from there. Bing Chat actually made Google nervous!
Would love to see how that plays out. It’s a pretty absurd situation to eagerly sign the deal and take the funding, and then, when better deals start showing up, turn around and try to blow it up.
I think the complaint would be two things, however IANAL
1. Lack of access to compute resources. Microsoft intentionally slowing OpenAI's ability to scale up and grow dominant quickly vs. Copilot, a competing product. Microsoft shouldn't be able to use its dominance in the cloud compute market to unfairly influence the market for consumer AI.
2. Microsoft should not automatically gain OpenAI's IP in domains outside of the AI offerings that the company was supplying when the initial agreement was made. If the terms of the contract must be upheld such that Microsoft gets all of OpenAI's IP, then it blocks OpenAI from competing in other markets, e.g. Windsurf vs. VS Code.
Probably, but it might not matter. They don't really need to compete on quality, just on the simplicity of selling a suite that's bundled together to enterprise, in the same way they did with Teams, which is inferior to Slack in pretty much every way (last time I had to use it, anyway). Isn't their advantage always sales and distribution? Maybe it's different this time, I don't know.
The biggest problem with Microsoft is their UX. From finding out where to actually use their products, to signing in, wading through modals, popups, terms and agreements, redirects that don’t work and links that point to nowhere. Along the way you’ll run into inconsistent, decades old UI elements and marketing pages that fully misunderstand why you’re there.
It’s a big, unsolvable mess that will forever prevent them from competing with legacy-free, capable startups.
They should delete all their public facing websites and start over.
Thanks. That was a great read. Somehow missed that. Two points to make:
1. Not sure why OSNews characterised this as an "epic rant". I thought he was remarkably restrained in his tone given both his role and his (reasonable) expectations.
2. This to me shows just how hard it is for leaders at large companies to change the culture. At some point of scaling up, organisations stop being aligned to the vision of the leadership and become a seemingly autonomous entity. The craziness that Bill highlights in his email is clearly not a reflection of his vision, and in fact had materialised despite his clear wishes.
When we think about how "easy" it would be for the executive of a large organisation to change it, those of us not experienced at this level have an unrealistic expectation. It's my belief that large organisations are almost impossible to "turn around" once they get big enough and develop enough momentum regarding cultural/behavioural norms. These norms survive staff changes at pretty much every level. Changing it requires a multi-year absolute commitment from the top down. Pretty rare in my experience.
That was epic. The type of email we all dread to receive at work. Can’t fault Bill for his detail though; most of those kinds of emails are “website slow, make fast”.
> When SeattlePI asked Bill Gates about this particular email last week, he chuckled. “There’s not a day that I don’t send a piece of e-mail… like that piece of e-mail. That’s my job.”
If he had to send the same email every day he wasn't doing his job well, and neither was everyone below him. Even a fraction of that list is too much.
It's not only public facing websites - Azure is also pretty inconsistent and lately any offer to preview a new UI was a downgrade and I happily reverted back - it's like they have a mandatory font and whitespace randomizer for any product. Also while far from a power user I've hit glitches that caused support tickets and are avoidable with clearer UX. Copilot in Azure - if it works at all - has been pretty useless.
Their UX, their naming conventions from products to frameworks and services, the products they've pulled the plug on, their user hostility and so on all point to the root of the problem being elsewhere. I think Microsoft is no longer reformable. It is a behemoth that will probably continue to coast along like a braindead Godzilla zombie that floats on sheer size alone.
Those stupid dialogs that make you think they will help you solve an issue but actually just waste 5-10 minutes "scanning", only to link you to irrelevant webpages that sometimes don't exist.
You’re contradicting yourself with that statement. Microsoft is seen as a mercenary… yes they make a lot of money, that’s proof they’re a mercenary. If you want to prove they are not then point to software categories they invented, not how much money they are making.
The biggest issue with Copilot might not be the model itself, but the naming strategy. One name is used for several completely different products, and users end up totally confused. You think you're using GitHub Copilot, but it's actually M365 Copilot, and you don't even get to choose the model. Microsoft really needs to make this clearer.
You probably are not a customer as a decision maker in a big traditional company/organization. MS is obfuscating on purpose so they can say in sales decks that if you buy this, you get all these Copilots and your Fortune 1000 business is AI-proof. What they leave out is that not every Copilot is equal.
For some reason I had also gotten the impression that Copilot was powered by OpenAI in some way. Perhaps the Microsoft OpenAI partnership gave me that impression.
I also wasn't aware that there was an OpenAI/Microsoft rivalry; I had the impression that Microsoft put a lot of money into OpenAI and that ChatGPT ran on Azure, or was at least available as an offering via Azure.
Copilot is powered by a Microsoft-hosted version of OpenAI's models. If you ask it, it says "I'm based on GPT-4, a large language model developed by OpenAI. Specifically, you're chatting with Microsoft Copilot, which integrates GPT-4 with additional tools and capabilities like web browsing, image understanding, and code execution to help with a wide range of tasks."
Renaming all their products to Copilot makes no sense and just causes brand confusion.
Copilot getting access to your entire 365/Azure tenant is just a security nightmare waiting to happen (in fact, there's already that one published and presumably patched vuln).
It has so many shackles on that it's functionally useless. Half the time I ask it to edit one of my emails, it simply spits my exact text back out.
Its one singular advantage is that it has crystal clear corpospeak licensing surrounding what it says your data will be used for. Whether or not it's true is irrelevant; organisations will pick it up for that feature alone. No one ever got fired for choosing IBM, etc.
I use many LLM tools (ChatGPT, Claude, Gemini, GitHub Copilot, etc), I have never ever gotten any version of MS Copilot to do anything useful for me. I've been stunned at how they can use the same models that ChatGPT does, copy their use cases, and still deliver a turd.
The Github Copilot (in VS Code especially) is the only application of LLMs that I've found useful from Microsoft. I would have loved amazing Copilot support in Word for working on a large complex document, but I haven't found that to work well.
YMMV, but I found it useful for drafting a pull request on GitHub, where it basically just did all the boring work, including finding the particular line in a large codebase that was throwing the error. It wasn't a hard problem, but it still would have required a bit of mental effort on my part, and I'd rather spend that reading a book.
I've come to the conclusion that once companies get big enough, they are unable to build anything really useful. I'm sure there are exceptions, but it feels like 99% of the time this is true. The best they can do is acquire a company and hope that goes well.
msft had a massive edge. it had exclusive access to models + had web search before anyone.
they flopped this royally, just like windows mobile. they created a shitty ux by shoving it inside the bing app, then they decided to charge for it instead of capturing all enterprise value.
lastly, the product has stalled and missed their biggest opportunity, which is tapping into the data. you can think it's because of how complex that must be, but then openai and everybody else did it.
it's truly a lesson in product mismanagement, once again, from microsoft
Just say the quiet part out loud, Microsoft is such a large anti-competitive company they literally don't have to build competing products. Customers are not even going to evaluate other options because it will be bundled with their other mediocre services.
It was bound to happen. Corporations always commit suicide once they're successful and it almost always looks the same. It's why I don't invest time in non-free software.
If only MS copilot was an actual co-pilot in my daily flight of work… but no, it cannot actually draft an email, create planner tasks or book meetings. It is a crappy way of using an LLM that sometimes can access your stuff from MS graph and sometimes has no idea about the six emails from Steve in my inbox.
And no, its PowerPoints are not good either, they are LinkedIn in ppt-form.
1. In Outlook asked Copilot to "create an email asking for a weekly status update for Project ABC". Iterating a bit, I asked it to be more casual and shorter. Here's what I got:
Hey Team,
Hope you're all doing great. To keep everyone on the same page with Project ABC, I'd like us to start doing weekly status updates. Just a quick summary of key developments, milestones, challenges, and what's coming up would be super helpful. This way, we can coordinate better and tackle any issues right away.
Thanks for your cooperation and hard work. Looking forward to your updates!
That seems pretty on point. Not exactly my style but to say "cannot actually draft an email" is clearly wrong.
2. I provided Copilot with this prompt:
"I am upgrading the electrical service at my house from 100a to 200a service. Create a project plan in Planner for this project."
Here's the response I got:
Nice! Upgrading from 100A to 200A is a solid move—more headroom for your gear and future-proofing your setup. Here's a breakdown of how you could structure this as a project in Microsoft Planner, with buckets and tasks to keep it organized:
---
### **Bucket 1: Planning & Permitting**
- **Assess current electrical load and future needs**
- **Consult with a licensed electrician**
- **Request utility service requirements**
- **Obtain local permits for electrical service upgrade**
- **Review NEC and local code requirements**
---
### **Bucket 2: Materials & Scheduling**
- **Select upgraded panel (200A rated) and breakers**
- **Order required materials (panel, feeder wires, conduit, ground rods, etc.)**
- **Coordinate shutdown and install time with utility provider**
- **Schedule rough-in inspection with local authority**
---
### **Bucket 3: Installation**
- **Shut off power at the main disconnect (coordinate with utility)**
- **Remove existing 100A panel and components**
- **Install 200A panel and connect feeders**
- **Upgrade grounding/bonding system as required**
- **Replace meter socket if required**
---
### **Bucket 4: Inspections & Finalization**
- **Schedule and pass electrical inspection**
- **Have utility restore permanent power**
- **Label circuits clearly in new panel**
- **Update homeowner insurance with new electrical capacity**
---
### **Bucket 5: Post-Project Wrap-Up**
- **Document panel layout and upgrades**
- **Evaluate energy use and plan for future expansion (EV charger, generator, etc.)**
- **Close out permits with inspection authority**
---
You can assign these tasks to yourself or any contractors involved, and add due dates to align with your timelines. Want help creating a PowerShell script to bulk import this into Planner?
It did produce a working script that required 1 configuration edit. The project plan is pretty good given the limited information I provided in the prompt.
3. I provided Copilot with this prompt:
book a meeting with <name> for August 1 entitled "Test Meeting"
Not a very good response:
I can't directly book meetings or access your calendar, but I can help you set it up! Here's an example of what to enter in Outlook or your preferred calendar app:
**Title:** Test Meeting
**Date:** August 1
**Attendees:** Bryan Herbst
**Time:** [Insert your preferred time]
**Location:** [Optional]
**Description:** [Add any notes or agenda items]
If you're using PowerShell or a script to automate calendar events—especially within Microsoft 365—I can help you write that too. Want to go that route?
It seems like these examples show that it has about the same capabilities as a basic chat interface like Claude or ChatGPT without any benefit from being integrated with the Microsoft ecosystem.
You nailed it. Microsoft should have a huge advantage with depth of integration, but for some reason treats Copilot in office as a glorified chat iframe. It's a huge missed opportunity.
If you ask it to 'send' an email instead, as I did the first time I saw it pop up (in my email client...), my first real use case, it will tell you it can't, just like your calendar example.
Even 'draft' really, if you forget what you've done, close your eyes and think about it from scratch, surely you expect that to be drafted in your ...drafts?
Cool, so you've spent just as much time "iterating a bit" as you would have done typing, your thinking and typing skills have atrophied a bit more, and you've made your colleagues' lives that bit duller by sending them something written by the "average person".
I'm confused over what anyone means when they say "Copilot", since it could mean the VS Code editor features or various features on github.com or the thing that Microsoft sell as part of their 365 office software.
Don't forget about the Copilot in Windows, which is different from the Copilot in Bing, which is different from Copilot in Edge, which is different from the Copilot in Copilot Studio... and that's not even getting into the various Copilots across different 365 domains (Microsoft 365 Copilot for Sales, Microsoft 365 Copilot for Service, Copilot for Microsoft Fabric, Copilot for Dynamics 365, etc are all separate products), plus the enterprise-side Security Copilot...
Good old Microsoft naming. I'll never understand how they can think it's a good idea to release multiple entirely different products and call them all variations of the same thing. One would think they would have solved this problem a decade ago and yet every few years it happens again.
At the top-right of that page, it has a little icon indicating 'enterprise data protection' but I can't see any way for me (the user) to know what type of Copilot licence (if any) the accountholder has assigned to my user account.
If you have the fancy Copilot Pro, you'll see it in the rest of your Office account, such as Outlook, where additional features are available, such as email summarization.
It's not just Microsoft. All of these companies competing in "AI coding" went to having "premium" requests when using bigger models and then unlimited usage with okayish models.
Why is this being downvoted? I’ve seen similar behavior, and it’s not outside the realm of possibility that MS would choose meager context windows and API limits in favor of profit.
A lot of discourse about Microsoft that paints them in a less than positive light gets downvoted without reason. I read a blog post a while back about their "brand reputation farm" that follows social media posts and tries to de-rank or drown out content. If I find the link I'll update this comment.
I'm not sure whether Microsoft Copilot and ChatGPT use different system prompts or if there's something else behind it, but Copilot tends to have this overly cautious, sterile tone. It always seems to err on the side of safety, whereas ChatGPT generally just does what you ask as long as it's reasonable.
So it often comes down to this choice:
Open https://copilot.cloud.microsoft/, go through the Microsoft 365 login process, dig your phone out for two-factor authentication, approve it via Microsoft Authenticator, finally type your request only to get a response that feels strangely lobotomized.
Or… just go to https://chatgpt.com/, type your prompt, and actually get an answer you can work with.
It feels like every part of Microsoft wants to do the right thing, but in the end they come out with an inferior product.
I think that must be it; the system prompt is likely the cause.
Just yesterday I was talking to a customer who was so happy with our "co-pilot" compared to ChatGPT and others that he wants to roll it out to the rest of the company.
We use Azure OpenAI + RAG + a system prompt targeted at architects (AEC). It really seems the system prompt makes all or most of the difference, because users now always get answers targeted towards their profession/industry.
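The profession-targeting part of a setup like this can be sketched as nothing more than a fixed system message prepended to every request. This is an illustrative sketch, not the commenter's actual code; the prompt wording and function names are made up:

```python
# Sketch: steering a chat-completions style API toward one industry (AEC)
# purely via the system prompt. The prompt text here is hypothetical.

AEC_SYSTEM_PROMPT = (
    "You are an assistant for architects in the AEC industry. "
    "Frame answers around building codes, materials, and project workflows."
)

def build_messages(user_question: str) -> list[dict]:
    # Messages array in the shape a chat-completions style API expects:
    # the system message sets the persona, the user message is the query.
    return [
        {"role": "system", "content": AEC_SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

msgs = build_messages("How should I document a load-bearing wall change?")
print(msgs[0]["role"], "->", msgs[0]["content"][:40])
```

The same user question with a generic system prompt and with this one can produce very different answers, which matches the commenter's observation.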
I wonder if a Lexus/Toyota Acura/Honda Lamborghini/Audi OpenAI/Microsoft marketing split isn't in the best interests of tech giants going forward since LLMs are nondeterministic, unlike the deterministic nation-states they've built up till now...
If they want to hurt Microsoft where it hurts, OpenAI should build an agent that writes Markdown, docx, and HTML versions of documents from simple chat or audio prompts. Imagine dictating your documents to AI and having it build, upload, and convert the document into relevant file formats..... I can't wait!!
My enterprise onboarded Copilot and Copilot agents and it’s fairly successful.
My observation is that in a disorganized and over-documented organization, Copilot flattens everything to an exec-summary register that moves things along a lot faster. It enables communication beyond the limiting pace of individuals learning to communicate hard things with nuance (or, sometimes, when people are reluctant to take the next step in the cycle).
It lifts to a baseline that is higher than before. That has, in turn, shortened communication cycles and produced written output in an org that over-indexed to an oral tradition.
I prefer Gemini Pro 2.5 - Google is really fumbling the ball by not having a solid subscription access model for it (plus some CLI coding agent) and enterprise access.
In hindsight, MSFT made out like a bandit in that deal since OpenAI seemingly tapped out of MSFT resources already. MSFT can always spin up its own LLM. It’s not that expensive for them and they can wait it out for tech to mature a bit more.
The problem is it is very hard to make changes and build innovative new products within big tech, at a pace to compete with smaller companies. Big tech succeeds despite it since the resource disparity is too much.
Since the launch of ChatGPT, Microsoft has had access to it and even owned some of the most popular code editors, and where did it take them? This is why Meta had to launch Threads with a very small team: a big team in Big Tech just can not compete.
Of course, like everything else, there are no absolutes, and when Big Tech feels there is an existential crisis on something they do start improving; however, such moments are few and far between.
Our management introduced Copilot last year; there was some mild hype, people were curious and gave it a spin, but it didn’t stick around in many conversations.
Now that everyone has access to Claude and claude-code, Copilot barely gets mentioned anymore. Maybe this wave dies down or they improve it, anyway these tools still have a long long way to go.
I read today that OpenAI is planning an ‘AI super app’ that would have canvas, word processing, etc., all in one work app. That actually sounds like a good idea to me and is very different from Google’s approach of integrating Gemini into the Workspace apps. Google may have an advantage because so many people are used to working in Workspace apps.
Thank you! Sadly as a struggling entrepreneur I do not have $299 to blow on one article. I'll take it as validation that my idea has legs... and I'm likely slightly ahead of them
I don’t understand why it’s not more useful to most people with Copilot subscriptions at work. It has access to my work’s OneDrive; it really should be the most commonly used LLM.
Aside from the product value and market sentiment around M365 Copilot, one should wonder about the timing of this article so close to Microsoft's fiscal year end.
I have a feeling a lot of "success" of OpenAI in the enterprise space is simply nepotism or in-network tech mafia buddies migrating from diversity hires to ChatGPT subscriptions.
I've been using it to automate some very basic html/js stuff and it's always forgetting context and stomping on old stuff that's not related to the prompt. I guess this is just leaky abstraction of its in context memory compression. It's manageable but all it's doing for me is allowing me to be lazier. It certainly isn't making me any more productive.
If I try to get it to do stuff outside my domain expertise it's making errors I can't catch. So I suppose if move fast and break things works for your business, then that's fine.
But that begs the question, a much better product than what?
Either way, we saw them fire a bunch of people and "replace them with AI," so it's not out of the question this is a move toward "AI tech leadership" tax subsidization as DEI is phased out.
We're paying for Copilot for Office365. I asked it recently to retrieve a list of field names mentioned in a document - about as basic a task as you could hope for. It told me it couldn't do so.
My precise request: "Extract the list of field names in Exhibit A."
Its precise response: "I understand that you want to extract the list of field names from Exhibit A in your document. Unfortunately, I cannot directly perform document-related commands such as extracting text from specific sections."
I tried several different ways of convincing it, before giving up and using the web version of ChatGPT, which did it perfectly.
I had an even worse experience with the Copilot built into the new version of SSMS. It just won't look at the query window at all. You have to copy and paste the text of your query into the chat window ... which, like, what's the point then?
I only used free Microsoft Copilot once back when GPT-4 came out and it wasn’t free on OpenAI yet. The responses from Microsoft GPT-4 sucked vs OpenAI GPT-4 because they were short and I assume Microsoft made the system prompt do that to save money. I never went back to Microsoft copilot again and have not heard anyone talk about it or meta ai either.
Microsoft Copilot uses their own model that is originally based on GPT-4 if I’m not mistaken.
But, it’s mostly a RAG tool, “grounded in web” as they say. When you give Copilot a query, it uses the model to reword your query into an optimal Bing search query, fetches the results, and then crafts output using the model.
I commend their attempt to use Bing as a source of data to keep up to date and reduce hallucinations, especially in an enterprise setting where users may be more sensitive to false information, however as a result some of the answers it gives can only be as good as the Bing search results.
It’s not necessarily terrible. It just sometimes leaves you wishing it was “smarter”. When I get a bad result, trying the same query on ChatGPT gives a much better response.
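The "grounded in web" flow described above (reword the query, search Bing, then answer from the results) can be sketched in a few lines. The functions below are stubs standing in for the real model and search calls; none of this is Microsoft's actual implementation:

```python
# Sketch of a search-grounded answer loop: model rewrites the query,
# a web search fetches snippets, and the model answers from those snippets.
# call_model and bing_search are hypothetical stand-ins for real API calls.

def call_model(prompt: str) -> str:
    # Stand-in for an LLM call via whatever client you use.
    return f"<model output for: {prompt[:40]}...>"

def bing_search(query: str) -> list[str]:
    # Stand-in for a Bing Web Search API call returning result snippets.
    return [f"<result snippet for '{query}'>"]

def grounded_answer(user_query: str) -> str:
    # 1. Reword the user's query into an optimal search query.
    search_query = call_model(f"Rewrite as a web search query: {user_query}")
    # 2. Fetch results for that query.
    snippets = bing_search(search_query)
    # 3. Craft the final answer using the results as context.
    context = "\n".join(snippets)
    return call_model(f"Using these results:\n{context}\nAnswer: {user_query}")

print(grounded_answer("ffmpeg command to shrink a .mov into an mp4"))
```

The upside of this shape is freshness and citations; the downside, as noted, is that the answer can only be as good as the search results fed into step 3.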
Real talk! Copilot is so bad. It’s literally useless. And they charge an absolute arm for it. Like, how is it soooo much worse than ChatGPT? I am a frustrated monkey when I use Copilot.
Microsoft's decision to name this product Copilot has to be the result of some form of internal sabotage, I refuse to believe otherwise.
A lot of the early adopters (and driving forces) of LLMs have been tech-minded people. This means it's quite a good idea NOT to confuse them.
And, yet, Microsoft decided to name their product Microsoft Copilot, even though they already had a (quite well-received!!) Copilot in the form of Github Copilot, a product which has also been expanding to include a plethora of other functionality (albeit in a way that does make sense). How is this not incredibly confusing?
So what actually _is_ Copilot? Is there a bing copilot? A copilot in windows machines? Is it an online service? (I saw someone post a link to an office 365)?
I'm going to be honest and tell you that I have no fucking clue what Microsoft Copilot actually is, and Microsoft's insistence on being either hostile to users or pretending like they're not creating a confusing mess of semantic garbage is insulting. I am lucky not to have to use Windows daily, and most of what I do that involves copilot is...Github Copilot.
I am knee-deep into LLMs. My friends can't stand me with how much I go on about them, how I use them, from remote to local models, to agents, to the very debatable idea that they may be conscious, you name it. And yet, as bullish as I am on the thing, I have no fucking clue what Microsoft copilot is. Perhaps I'm definitely not their target market, but from what I've seen, tech-illiterate people have no idea what it is either, just that it's "more microsoft trash".
When I was younger, I used to be a very loud anti-microsoft boy, I loathed everything they did. Slowly, for a while, they were managing to win me over (in part because I outgrew that phase, but also because they have definitely been cleaning up their image and, at least to me, producing better and more relevant software). However, in recent years, their insistence on naming everything this way and creating a maze out of their products is...baffling. I feel myself not being able to stand MS again.
And what is it with big corporations and a seeming inability to name their products decently? This is appalling. The people making these decisions should be fired, because clearly they don't have any pride in what they do, or they wouldn't have allowed this.
> Microsoft's decision to name this product Copilot has to be the result of some form of internal sabotage
If you look at this in isolation, yes. If you look at this historically, it's totally on-brand for Microsoft. Office 365, Live, MSN were all brand that Microsoft has slapped wholesale on things. Microsoft has always been reactive when it comes to branding, rather than proactive.
I'm reminded of when .NET was released suddenly everything was .NET, even an office release was named after it. Then it finally narrowed down into the programming languages we know and love or hate depending on your vibe. I assume this will happen here too eventually.
Everything is Copilot, but they're all different products, and one of them is just a launcher to Office apps, each with their own assistant called Copilot
The problem is Copilot is dumb. It's allegedly using the same models ChatGPT does, but Microsoft seems to have done something to Copilot which lobotomises it so badly it's unusable for anything serious. Great for the MS ecosystem integration, but as a general purpose tool, it's nowhere near ChatGPT.
I program at a non-tech Fortune 100 company. Our team is on a pilot program to try out AI-assisted programming at the company, and Cursor with OpenAI models are mostly what we are using. I have it integrated into my standard IDE workflow and try to write unit tests and the like with it.
https://archive.ph/cemKI
Some of the common causes of bad or weird responses that I’ve learned from having this exact same conversation over and over again:
- Some people use one never-ending singular session with Copilot chat, unaware that past context is influencing the answer to their next question. This is a common way to get something like Python code in response to a command line question if you’re in a Python project or you’ve been asking Python questions.
- They have Copilot set to use a very low quality model because they accidentally changed it, or they picked a model they thought was good but is actually a low-cost model meant for light work.
- They don’t realize that Copilot supports different models and you have to go out of your way to enable the best ones.
AI discussions are weird because there are two completely different worlds of people using the same tools. Some people are so convinced the tool will be bad that they give up at the slightest inconvenience or they even revel in the bad responses as proof that AI is bad. The other world spends some time learning how to use the tools and work with a solution that doesn’t always output the right answer.
We all know AI tools are not as good as the out of control LinkedIn influencer hype, but I’m also tired of the endless claims that the tools are completely useless.
The "pick your model" thing is so stupid.
"How dumb do you want your AI to be?"
"Why do I have to select?"
"Because smart costs money"
"So... I can have dumb AI but it's cheaper?"
"Yes"
"How would the average person know which to pick?"
"Oh you can't know."
I hope they can invent an AI that knows which AI model my question should target cheaply.
And then the model names & descriptions are virtually useless at providing any guidance.
ChatGPT lets me choose between GPT-4o ("Great for most tasks"), o3 ("Uses advanced reasoning"), o4-mini ("Fastest at advanced reasoning"), and o4-mini-high ("Great at coding and visual reasoning").
Is what I'm doing "most tasks"? How do I know when I want "advanced reasoning"? Great, I want advanced reasoning, so I should choose the faster one with the higher version number, right? etc.
Then there's GPT-4.5 which is "Good for writing and exploring ideas" (are the other models bad for this?), and GPT-4.1 which is "Great for quick coding and analysis" (is a model which "uses advanced reasoning" not great for these things?)
Can you describe your task and then ask ChatGPT which model you should use?
This presents the same problem, since none of the models are indicated to be best at choosing the model to use for a task.
Try different ones out and learn which works best for what type of work?
Without getting too much into semantics, I would suspect that most individuals would have trouble classifying their "type of work" against an opaque set of "type of work" classifiers buried in a model.
He was suggesting that you try different models for the same thing and see which output you like best. It's tedious but at least you get an answer.
Can't you just run a few examples by hand to see how they perform for your tasks, before committing to any for production?
> before committing to any for production
I'm talking about ChatGPT, which is a Web and desktop app where users run interactive sessions. What does "production" mean in this sense?
I think I misunderstood what people were talking about. Somehow I thought it was about their APIs, for specific uses in other apps.
To their credit, they did get this part correct. "ChatGPT" is the user-facing apps. The models have terrible names that do not include "ChatGPT".
Anthropic, by contrast, uses the same name for the user-facing app and the models. This is confusing, because the user-facing apps have capabilities not native to the models themselves.
It’s simple - practice using them instead of complaining. Maybe you’ll figure out the differences on your own.
As a person who uses LLMs daily, I do in fact do this. Couple problems with this approach:
- there are billions of people who are not accustomed to using software this way, who are in the expected target market for this software. Most people cannot tell you the major version number of their mobile OS.
- this approach requires each individual to routinely perform experiments with the expanding firmament of models and versions. This is obviously user-hostile.
Anyway, my hot take here is that making things easier for users is better. I understand that is controversial on this site.
Imagine if this is what people suggested when I asked what kind of screwdriver I should use for a given screw, because they're all labelled, like, "Phillips. Phillips 2.0. Phillips.2.second. Phillips.2.second.version 2.0. Phillips Head Screwdriver. Phillips.2.The.Second.Version. Phillips.2.the.second.Version 2.0"
You bring up the important point that, for a company that earns money off of wasted tokens, a confusing selection of models can translate into extra spend on experimenting and tweaking them.
Some users may not appreciate that, but many more might be drawn to the "adjust the color balance on the TV" vibes.
> I hope they can invent an AI that knows which AI model my question should target cheaply.
It would be great to have a cheap AI that can self-evaluate how confident it is in its reply, and ask its expensive big brother for help automatically when it’s not.
That would actually be the AGI we are waiting for, since we - as humans, in a surprisingly big portion of all cases - don't know how, or can't seem to do that either!
On the other hand, ChatGPT seems to be getting better at knowing when it should Google something for me rather than hallucinate something.
Shouldn’t asking a more expensive model for input be a similar level of «tool use»?
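The escalation idea is easy to sketch. Everything below is hypothetical and for illustration only: the function names, the self-reported confidence score, and the 0.7 threshold stand in for real model calls, which is exactly the part nobody has made reliable yet.

```python
# Hypothetical sketch of cheap-model-first routing with escalation.
# ask_cheap_model and ask_expensive_model stand in for real API calls;
# the confidence score and threshold are invented for illustration.

def ask_cheap_model(question: str) -> tuple[str, float]:
    """Small, fast model that also reports how confident it is."""
    if "ffmpeg" in question:
        return "ffmpeg -i movie.mov -crf 23 movie.mp4", 0.9
    return "I'm not sure.", 0.2

def ask_expensive_model(question: str) -> str:
    """The 'expensive big brother' described above."""
    return "A carefully reasoned answer."

def route(question: str, threshold: float = 0.7) -> str:
    answer, confidence = ask_cheap_model(question)
    if confidence >= threshold:
        return answer  # cheap answer was confident enough
    return ask_expensive_model(question)  # escalate to the big model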
I usually feel with ChatGPT picking a model is like "Which of the three stooges would you like to talk to, Curly, Larry, or Moe (or worse, Curly Joe)?" I usually only end up using o3 because gpt-4o is just that bad, so why would I ever want to talk to a lesser stooge?
If paying by API use, it probably makes more sense to talk to a lesser stooge where possible, but for a standard Pro plan I just find the lesser models aren't worth the frustration they cause.
I think you make a good point. Cursor is doing a basic “auto” model selection feature and it could probably get smarter, but to gauge the complexity of the response you might need to run it first. You could brute force it with telemetry and caching if you can trust the way you measure success.
I imagine that we need a bootstrap ai to help you optimize the right ai for each task.
I don’t think I’d trust the vendor’s ai to optimize when they will likely bias toward revenue. So a good case for a local ai that only has my best interests at heart.
Currently, the guidance from vendors is “try it and see which yields the best results” which is kind of like “buy this book, read it, and see if you like it” and how of course the publisher wants you to take this action because they get their money.
> I hope they can invent an AI that knows which AI model my question should target cheaply.
Isn't that the idea of OpenRouter?
Not exactly, but yeah. OpenRouter is a unified API, directory and billing system for LLM providers.
I think you are getting confused by the term "Model Routing", which to be fair OpenRouter does support, but it's a secondary feature and it's not their business focus. Actually OpenRouter is more focused on helping you choose the best provider for a specific open model based on their history of price, speed, reliability, privacy...
The model routing is simply provided by NotDiamond.ai, there are a number of other startups in this space.
https://openrouter.ai/docs/features/model-routing
"I’m also tired of the endless claims that the tools are completely useless."
Who claimed that here?
I read a claim that Copilot is dumber than Claude and ChatGPT, and I tend to confirm this.
"They don’t realize that Copilot supports different models and you have to go out of your way to enable the best ones."
So it's possible that none of us who think that went out of our way to find out when there were working alternatives, but it would still be on Microsoft for making it hard to make good use of their tool.
Yeah I'm not sure why they'd think my point was that LLMs are useless. Clearly I'm integrating them into my work, I just think Copilot is the dumbest. It's given me the most nonsensical answers like the example I provided, and it's the one I use the least. Which is even crazier when you consider we're on a paid version of Copilot and I just use free ChatGPT and Claude.
Your entire comment sure read a lot like you were calling the tools useless. You even used the worst possible prompt to make your point. That’s likely why people are reacting badly.
I said the Copilot is the worst competitor in the space.
Where did I say anything in general about LLMs being useless?
The thing responses like this miss, I am pretty sure, is that this is a nondeterministic machine, and nondeterministic machines hidden behind a complete black-box wrapper can produce wildly different results based on context and any number of independent unknown variables. So pasting "I did the same thing and it worked fine" is essentially this argument's version of "it worked on my local." Or it essentially boils down to "well sure, but you're just not doing it right" when the "right" way is undefined and also context specific.
You’re both right. Some problems should be solved with better user education. And some should be solved with better UX. It’s not always clear which is which. It’s too simple to blame everything on user error, and it’s too simple to blame everything on the software.
Cell phones are full of examples. So much of this stuff is obvious now we’ve been using them for awhile, but it wasn’t obvious when they were new. “My call dropped because I went in a tunnel” is user error. “My call cut out randomly and I had to call back” is a bug. And “my call cut out because my phone battery ran out” is somewhere in the middle.
For chatbots, lots of people don’t know the rules yet. And we haven’t figured out good conventions. It’s not obvious that you can’t just continue a long conversation forever. Or that you have to (quite consciously) pick which model you use if you want the best results. When my sister first tried ChatGPT, she asked it for YouTube video recommendations that would help when teaching a class. But none of the video links worked - they were all legitimate-looking hallucinations.
We need better UX around this stuff. But also, people do just need to learn how to use chatbots properly. Eventually everyone learns that calls will probably drop when you go into a tunnel. It’s not one or the other. It’s both.
This is part of why I really like local models. I always use the same random seed with mine so unless I'm using aider the responses are 100% deterministic. I can actually hit c-r in my shell to reproduce them without having to do anything special.
Some are more deterministic than others, e.g. Gemini Flash.
The non-determinism comes from the sampler not the model.
I always thought it was packaged with the model.
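A toy illustration of that point, with made-up logits (real inference stacks add wrinkles like batching and floating-point ordering, but the principle holds): the forward pass yields the same scores for the same input every run, and randomness only enters at the sampling step.

```python
import math
import random

# Pretend these are the logits a model produced for the next token;
# for a fixed input, the forward pass yields the same scores every run.
logits = {"paris": 2.0, "london": 0.5, "berlin": 0.1}

def softmax(scores):
    z = max(scores.values())
    exps = {tok: math.exp(s - z) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def sample(scores, seed=None):
    if seed is None:
        # Greedy decoding: no randomness at all, always the top token.
        return max(scores, key=scores.get)
    # Seeded sampling: random draws, but reproducible for a fixed seed.
    rng = random.Random(seed)
    probs = softmax(scores)
    return rng.choices(list(probs), weights=list(probs.values()))[0]

assert sample(logits) == "paris"                         # greedy is deterministic
assert sample(logits, seed=7) == sample(logits, seed=7)  # fixed seed reproduces
```

This is why a local setup with a pinned seed and sampler can replay responses, while a hosted service that hides (and silently changes) its sampler settings cannot.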
The memory feature can also be a problem: it injects stuff into the prompt context that you didn't explicitly write, with the intent that it will help, because it knows you are a Python programmer, so let's respond with a Python script instead of the usual ffmpeg CLI command.
"Spin the chatroulette again and see if you vibe something better" is not a foundation for a business.
Well, unless your business is selling vibes.
Everything is like this.
I saw an IT professional google “My PC crashed” to diagnose a server bluescreen stop error.
Reminds me of
I’m Feeling Lucky -> bad result -> Google search is useless
1. I would say that nobody did that, so you are making up a straw man
2. The Copilot or ChatGPT or Claude "Ask" buttons should then be renamed to "I'm feeling lucky". And that would be the only button available.
Yeah, except Feeling Lucky is the only button you can press, and people blame you if you don't get lucky.
I am 67.87% certain they make it dumber/smarter during the day. I think it gets faster/better during non-business hours. This needs to be tested more to be confirmed, though. However, they have exactly ZERO transparency (especially the subscription model) into how much you are consuming and what you are consuming. So it doesn't really help with the suspicions.
I remember reading an article about different behavior between summer and winter. So, working better/worse during business hours doesn't sound completely crazy.
But them turning some knobs based on load also seems reasonable.
What you and many others seem to miss is that the LLM is not deterministic.
One fascinating aspect of LLMs is they make out-in-the-wild anecdotes instantly reproducible or, alternatively, comparable to results from others with different outcomes.
A lot of our bad experiences with, say, customer support hotlines, municipal departments, bad high school teachers, whatever, are associated with a habit of speaking that adds flavor, vibes, or bends experiences into on-the-nose stories with morals, in part because we know they can't be reviewed or corrected by others.
Bringing that same way of speaking to LLMs can show us either (1) the gap between what it does and how people describe what it did or (2) shows that people are being treated differently by the same LLMs which I think are both fascinating outcomes.
LLMs are definitely not instantly reproducible. The temperature setting adjusts randomness, and the models are frequently optimized and fine-tuned. You will get very different results depending on what you have in your context. And with a tool like Microsoft Copilot, you have no idea what is in the context. There are also bugs in the tools that wrap the LLM.
Just because other people on here say “worked for me” doesn’t invalidate OPs claim. I have had similar times where an LLM will tell me “here is a script that does X” and there is no script to be found.
I was intentionally broad in my claim to account for those possibilities, but also I would reject the idea that instant reproducibility is generally out of reach on account of contextual variance for a number of reasons.
Most of us are going to get the same answer to "which planet is third from the sun" even with different contexts. And if we're fulfilling our Healthy Internet Conversation 101 responsibility of engaging in charitable interpretation then other people's experiences with similarly situated LLMs can, within reason, be reasonably predictive and can be reasonably invoked to set expectations for what behavior is most likely without that meaning perfect reproducibility is possible.
I think it really depends on the UI; if it was in some desktop-native experience, maybe it accidentally produced a response assuming there would be a code canvas or something and sent the code response under a different JSON key.
We're also seeing a new variant of Cunningham's law:
The best way to get the right answer from an LLM is not to ask it the right question; it's to post online that it got the wrong answer.
> One fascinating aspect of LLMs is they make out-in-the-wild anecdotes instantly reproducible
How? I would argue they do the exact opposite of that.
Asking the number of Rs in the word Strawberry is probably the most famous one.
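The check itself is a one-liner in ordinary code; models fumble it because tokenizers hand them chunks of the word rather than individual letters.

```python
# Plain string code sees every letter, so the famous question is trivial.
word = "strawberry"
assert word.count("r") == 3  # s-t-r-a-w-b-e-r-r-y: three r's
```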
AI probably hates him so it acts dumb.
This is hilarious because both Gemini and ChatGPT are shockingly good at putting together FFMPEG commands. They can both put together and also understand the various options and stages/filters.
I really like the final remark, "or is there anything else that I can help you with"?
Yeah, like how about answering the fucking question? lol
sed, awk, docker, ffmpeg, etc... are probably the most Googled commands. It's kind of shocking that MS's LLM is bad at this.
Same here (MS Shop).
My shock moment was when I was asking it to convert an image into a nice PPTX slide, naively assuming it had the best PowerPoint capabilities since that's also an MS product.
It returned a non-formatted text box on one slide. I had to laugh so hard people in the office noticed.
Gemini-in-sheets is the same way
I asked it to make a sheet look nicer and it couldn't without me explicitly telling it what I wanted done.
When I told it to do certain things, it's like "that's not implemented yet, stay tuned!"
Oh yeah, one time I uploaded a couple of files to figure out an issue I was having, and it offered to rewrite the files to fix the issue. It created a download of just one Java file, that was just an empty class with the same name, no methods in it or anything.
Yeah, working in an MS shop in the past couple years, that's what I've been saying ever since first iteration Copilot for MS Office came out, and it's true to this very day: you want AI to help you with your MS Office files? Ask ChatGPT. Or get API keys and use some frontend with a half-assed third party plugin that knows how to convert Markdown into .docx or such. Results are still an order of magnitude better than the Copilot in MS Office.
I blame it on corporate reluctance to take risks that could result in bad press. They put a lot of work into screwing Copilot up. I mean, they had it running GPT-4 back when GPT-4 was the new hot thing. The way it was comically useless is not something that "just happens" - as evidenced by the fact that just running GPT-4 via API yourself produced good results by default.
That's a good note. I have all of my documentation in markdown (which Microsoft parades on with GitHub, VSCode, docs.microsoft.com, etc.) but Copilot can't or won't read these files. I had to pandoc everything over to docx files before it even saw them. Pretty wild.
Also in line with this, Copilot 365 seems to not get how charts work. I asked it with multiple different massaged data sets and it kept giving worse and worse answers, even after resetting the chat and data to as simple as possible (think 10 dates, 2 columns of integers), until it ultimately gave me a blank chart. I gave up and asked GPT.
I cannot reproduce this in any version of copilot?
Copilot with outlook.com
Copilot base one that comes with M365,
And the add-on one for $30/mo.
Copilot in VS code
All produce: ffmpeg -i movie.mov -vcodec libx264 -crf 23 -preset medium -acodec aac -b:a 128k output.mp4
Which is not surprising because it's just an OpenAI 4o call... so how are you getting this?
> I cannot reproduce this in any version of copilot?
Because LLM-based service outputs are fundamentally not-reproduceable. We have no insight into any of the model settings, the context, what model is being run, etc.
The fact they have so many different versions is hilarious to me, when they're in the context of courting everyday enterprise customers.
Gemini already giving me Flash or Pro, like I'm supposed to know and decide which I need, is missing the point, but 4 of them is crazy.
Copilot is not OpenAI
It is OpenAI weights under the hood, unless something changed recently?
At least Bing Chat was GPT-4-base with Microsoft's own fine-tuning.
And there's been a number of iterations of 4o. So it could be a really old, pruned one if upgraded from gpt-4.
The pipeline Microsoft uses for Copilot products actually hides what models they are using, and you have no influence over it. Sometimes they use smaller models, but I have no clear source from Microsoft saying this...
Copilot is OpenAI in branding since day 1.
https://en.wikipedia.org/wiki/GitHub_Copilot#Implementation
What model? It got right first try here with your exact prompt using the free GPT-4.1 model.
https://i.imgur.com/toLzwCk.png
ffmpeg -i movie.mov -c:v libx264 -preset medium -crf 23 -c:a aac -b:a 128k movie.mp4
BUT, I have this in my copilot-instructions.md file:
That's GitHub Copilot, not "Microsoft Copilot", the bot built into the Microsoft 365 landing site. It doesn't give you model options, for example.
Whoever decided to use the same brand for different experiences should be demoted at minimum. What a way to destroy trust.
I suggest you never visit https://www.office.com/
Microsoft Office is one of the most recognizable and valuable brands ever. I'm quite terrible at marketing, and even I can recognize how stupid the rebrand was.
This is the same company that thought it would be an awesome idea to rename "Microsoft Remote Desktop" to "Windows App" on MacOS.
It wasn't. It's the dumbest name ever.
Don't forget Windows Mail->Outlook (New), replacing Outlook with Outlook (Classic). Same with "Teams (Classic)" and "Teams (New)"
Or the Teams download page that had two different versions - Teams for Home and Teams for Work/School.
Or .NET->.NET Core & .NET Framework->Back to .NET again.
Maybe they figured their brand was too recognizable and valuable, and had to knee-cap it to restore the cosmic balance of the Great Material Continuum.
EDIT:
There's even a rule of acquisition that could possibly apply here: #239 - Never be afraid to mislabel a product.
Literally this. It's one of the strongest names in all of software. It really boggles the mind.
I think there's little chance it won't be changed back. Changing the name was probably motivated by someone in management pushing the name change so that they could list it as a personal achievement as one of the "new" AI products they'd overseen the release of in the current zeitgeist.
I thought that renaming Active Directory to Entra ID was bad. Every single tech person who ever touched a Windows server knows what AD is. Then they change to name to something that sounds like it's going to give you an anal probe. What a dumpster fire...
Thank you for this. As someone who recently had to stumble back into turning a few knobs in (what I thought would be) AD for Office 365 licensing needs, after ~10 years outside of the MS sandbox, I had no earthly idea what Entra was. Until right now.
Good lord
https://news.ycombinator.com/item?id=42751726
Microsoft is the worst offender at renaming their products and services with such bad confusing names I don't think it's helping anyone, including Microsoft.
Imagine literally squandering the brand name "OFFICE"
Next they are going to rename Windows to Microsoft Copilot.
I keep missing meetings because I foolishly confuse Teams with Teams (classic).
I got unreasonably triggered by this.
You are not alone. It evoked a physical reaction in me
"To continue, please install the Microsoft 365 Copilot app"
I got this on mobile. Seems to be pretty apt.
Wait, what? Is this a joke? Did they really rebrand Office to '365 Copilot app'? I feel like I'm missing the plot, they can't be serious.
Yes, they literally did that. It's absolutely moronic. Whoever came up with this idea should be fired along with everyone else who greenlit it.
Don't worry, the other Office homepage says that it's actually been renamed to just Microsoft 365, not Microsoft 365 Copilot App:
https://www.microsoft.com/en-us/microsoft-365/microsoft-offi...
I got there by going to office.com and clicking Products > Microsoft Office. Lol. Rofl, even. This has made my day. And we all thought calling their third generation console Xbox One was the worst possible branding decision.
Oh. My. Fucking. God.
Are they aware that people will struggle to find out if Office is installed, and that they will keep calling it Office til the end of times (aka the next rebranding, which will revert things back) anyway?
Microsoft has the worst branding in tech. Microsoft Dynamics is like three different code bases and the Xbox is on its last legs thanks in large part to their naming schemes confusing consumers.
Ha. Microsoft does it all the time.
https://news.ycombinator.com/item?id=40419292
Having established brand awareness is a double-edged sword. Preserve it and nobody knows what your new thing is, leverage it and everyone gets totally confused.
Ah yes, I call this "brandfucking."
IBM used to be a repeat offender. I recall trying to buy the WebSphere (Java) application server for a client and then finding out that IBM had slapped "WebSphere" on all their products including things like¹ MQ Series (a queue) and DB/2 (a database). It took me an hour to figure out the right item and it was an online purchase!
¹I might be misremembering the exact products but it was similarly absurd.
They were sticking “Watson” on all their product names for a while too.
Yep, and they got very overexcited about "VisualAge" for this, that, and the other at one point. "VisualAge for C++ for OS/2" being one of the more coherent examples I guess...
Probably the same one responsible for Office -> 365 naming
It gets worse
https://news.ycombinator.com/item?id=42751726
It’s because in Microsoft terminology a “copilot” is a chatbot or LLM agent.
So you get your Copilot for M365 subscription and add Copilot Studio, which you use to create copilots.
This almost makes sense, but it is certainly not how Microsoft marketing did things. "Microsoft 365 Copilot app" is a suite of productivity apps, most well known for Word, Excel, and PowerPoint. It was formerly known as "Office". Microsoft 365 Copilot app includes Copilot as one of the apps.
This is all information taken from office.com, not some joke or exaggeration...
Can confirm - I'm looking at my Android phone now; the "Office Hub" app I knew as "Office" or "Microsoft 365" has, at some point, renamed itself to "M365 Copilot". To make things more obvious and less confusing, it's sitting right next to an app named "Copilot", which is their ChatGPT interface, and as far as I can tell, doesn't do anything copiloty with the Office suite.
Looking at the two side by side in an app manager, I see:
- Copilot - com.microsoft.copilot
- M365 Copilot - com.microsoft.office.officehubrow
And they both have identical icon, except the latter has a tiny black rectangle with tiny white "M365" label tucked in the corner.
It's amazing to me how too much marketing education and/or experience seems to rot the brain. You learn on like day 4 of Marketing 101 that your brands should be distinct and recognizable, and hopefully some solid tips on how to do that. Cool. Solid. Seems obvious but there's plenty of things that seem obvious in hindsight that education can help you with.
Somewhere between that and a master's degree and 10 years at a prestigious marketing firm, though, apparently there's some lessons about how you should smear all your brands all over each other in some bid to, I presume, transfer any good will one may have had to all of them, but it seems to me that they could stand to send those people back to MKT101 again, because the principle of labeling what your product actually is seems to elude them after Too Much Education.
Thing is, it's the latter lessons that are correct, because the ultimate arbiter of which marketing practices work or not is the market itself.
If anything, Marketing 101 works as a scaffolding but you learn the real lessons later on (basically like with every other vocational training wrapped in a degree, including especially computer science) - but also, and perhaps more importantly, it serves as a fig leaf. You can point to that and say, it's a Science and an Art and is Principled and done well It Is For The Good Of All Mankind, and keep the veneer of legitimacy over what's in practice a more systematized way of bringing harm to your fellow humans.
Also specifically wrt. brands - brands as quality signals mostly died out a decade or more ago; mixing them up is just a way to get their decaying corpses to trick more people for a little longer.
I see. Still worked for me.
Opened: https://copilot.microsoft.com
Same prompt: ffmpeg command to convert movie.mov into a reasonably sized mp4
https://i.imgur.com/CuaxIlL.png
Yeah, it's really annoying how opaque they are about the model there. Always just "GPT-4 based" or "GPT-4o based" if you dig in their blog archives. Makes one unable to check it against benchmarks, see when it's updated, or set expectations. Is it a distill? A lower-precision quant? An old revision? Who knows.
Microsoft has really screwed up on branding yet again. Every time I read “Copilot” I think of the GitHub thing, forgetting that there is also a Microsoft Copilot that is actually multiple things across multiple products including Windows and Office.
It’s also a website like ChatGPT apparently? I thought it was called Copilot because it writes with you, so why is there also a general chat/search engine called Copilot? Jesus.
I think you may be confusing Microsoft Copilot with Microsoft365 Copilot? The first doesn’t give you access to Microsoft Copilot Studio but that might also be available with Microsoft Copilot Pro.
My confusion has only increased.
They have even renamed Office to Microsoft 365 Copilot. Yes. Microsoft Office.
https://www.windowslatest.com/2025/01/18/microsoft-just-rena...
Yeah that's not going to work.
Someone within Microsoft really needs to forward this entire thread over to their marketing department.
I believe you might be confusing Copilot Search with Copilot+? Which is of course different from Copilot Pro though not necessarily entirely distinct.
And Sam Altman thought that they were bad at naming things. Good thing they were bought up by the king of naming things. /s
Edit: They are doubling down on bad naming conventions so hard that it makes me think it's some kind of dark pattern sales strategy..
Classic HN psychology would say: Occam's razor would suggest mediocrity before an elaborate dark pattern scheme.
And I would agree with them in this case.
I agree with you but i also think mediocrity and semi accidental dark patterns can go hand in hand. In a "if it sells don't fix it" kind of way.
You just made me realize Copilot does not always refer to (Microsoft's) Github Copilot ... apparently.
I call the GitHub Copilot "Copilot" and the Microsoft Copilot "Bing/Copilot." I address it as that in my prompts. It works pretty well for me.
[dead]
This is pretty interesting. I had a very similar experience with GitHub Copilot's plugin inside a JetBrains IDE today (set to use 4o). I asked it to tell me how to do something; it instead rooted around in my code, tried to write a Python file (it's a PHP project), told me it couldn't do it, and did the exact same "Do you want me to try again or is there anything else I can help you with?"
Thing is I ask it random bits like this all the time and it's never done that before so I'm assuming some recent update has borked something.
Ohh wow, that's bad. Just tried this with Gemini 2.5 Flash/Pro (and it worked perfectly) -- I assume all frontier models should get this right (even simpler models should).
I'd be willing to bet a more clear prompt would've given a good answer. People generally tend to overlook the fact that AIs aren't like "google". They're not really doing pure "word search" similar to Google. They expect a sensible sentence structure in order to work their best.
Maybe, but this sort of prompt structure doesn't bamboozle the better models at all. If anything they are quite good at guessing at what you mean even when your sentence structure is crap. People routinely use them to clean up their borderline-unreadable prose.
I wish I had a nickel for every time I've seen someone get a garbage response from a garbage prompt and then blame the LLM.
I'm all about clear prompting, but even using the verbatim prompt from the OP "ffmpeg command to convert movie.mov into a reasonably sized mp4", the smallest current models from Google and OpenAI (gemini-2.5-flash-lite and gpt-4.1-nano) both produced me a working output with explanations for what each CLI arg does.
Hell, the Q4 quantized Mistral Small 3.1 model that runs on my 16GB desktop GPU did perfectly as well. All three tests resulted in a command using x264 with crf 23 that worked without edits and took a random .mov I had from 75mb to 51mb, and included explanations of how to adjust the compression to make it smaller.
There's as much variability in LLM AI as there is in human intelligence. What I'm saying is that I bet if that guy wrote a better prompt his "failing LLM" is much more likely to stop failing, unless it's just completely incompetent.
What I always find hilarious too is when the AI skeptics try to parlay these kinds of "failures" into evidence LLMs cannot reason. Of course they can reason.
I get better result when I intentionally omit parts and give them more playroom to figure it out.
Less clarity in a prompt _never_ results in better outputs. If the LLM has to "figure out" what your prompt likely even means, it's already wasted a lot of computation going down trillions of irrelevant neural branches that could've been spent solving the actual problem.
Sure you can get creative interesting results from something like "dog park game run fun time", which is totally unclear, but if you're actually solving an actual problem that has an actual optimal answer, then clarity is _always_ better. The more info you supply about what you're doing, how, and even why, the better results you'll get.
I disagree. Less clarity gives them more freedom to choose and utilize the practices they are better trained on instead of being artificially restricted to something that might not be a necessary limit.
The more info you give the AI the more likely it is to utilize the practices it was trained on as applied to _your_ situation, as opposed to random stereotypical situations that don't apply.
LLMs are like humans in this regard. You never get a human to follow instructions better by omitting parts of the instructions. Even if you're just wanting the LLM to be creative and explore random ideas, you're _still_ better off to _tell_ it that. lol.
Not true, and the trick to getting better results is to let go of that incorrect assumption. If a human is an expert in JavaScript and you tell them to use Rust for a task that can be done in JavaScript, the results will be worse than if you just let them use what they know.
The only way that analogy remotely maps onto reality in the world of LLMs would be in a `Mixture of Experts` system where small LLMs have been trained on a specific area like math or chemistry, and a sort of 'Router pre-Inference' is done to select which model to send to, so that if there was a bug in a MoE system and it routed to the wrong 'Expert' then quality would reduce.
However _even_ in a MoE system you _still_ always get better outputs when your prompting is clear with as much relevant detail as you have. They never do better because of being unconstrained as you mistakenly believe.
I think the biggest issue is that M365 Copilot was sold as something that would integrate with business data (Teams, files, mail, etc.), and that never quite worked out.
So you end up with a worse ChatGPT that also doesn't have work context.
When you do have that work context MS copilot performs quite well. But outside of that usecase it's easy to see their model is pretty bad.
It absolutely does not perform well with work context.
Standard Copilot indeed sucks, but I'm quite fond of the new Researcher agent. It spends much more time than any of the others I've tried, like Perplexity Pro and OpenAI.
From a one line question it made me a relevant document of 45 pages examining the issue from all different sides, many of which I hadn't even thought of. It spent 30 mins working. I've never seen Perplexity spend more than 5.
I wouldn't be surprised if they significantly nerf it to save on computing costs. I think right now they give it their all to build a customer base, and then they nerf it.
Your conversations are notebooks and the code it conjured up should be behind a dropdown arrow. For visualization it seems to work fine (i.e. Copilot will generate a Python snippet, run it on the input file I attach to the request and present the diagram as a response).
In my experience Microsoft Copilot (free version in Deep Think mode) is way better than ChatGPT (free version) in most of things I throw at them (improving text, generating code, etc).
I put your exact prompt into Copilot and it gave me the command
ffmpeg -i movie.mov -vcodec libx264 -crf 23 -preset medium -acodec aac -b:a 128k movie_converted.mp4
Along with pretty detailed and decent-sounding reasoning as to why it picked these options.
It's become increasingly obvious that people on Hacker News literally do not run these supposed prompts through LLMs. I bet you could run that prompt 10 times and it would never give up without producing a (probably fine) sh command.
Read the replies. Many folks have called gpt-4.1 through copilot and get (seemingly) valid responses.
What is becoming more obvious is that people on Hacker News apparently do not understand the concept of non-determinism. Acting as if the output of an LLM is deterministic, and that it returns the same result for the same prompt every time is foolish.
Run the prompt 100 times. I'll wait. I'd estimate you'd fail to get a shell command maybe 1-2% of the time. Please post snark on Reddit. This site is for technical discussion.
even gemma3:12b gets it correct:
~> ollama run gemma3:12b-it-qat
>>> ffmpeg command to convert movie.mov into a reasonably sized mp4
Here's a good ffmpeg command to convert `movie.mov` to a reasonably sized MP4, along with explanations to help you adjust it:
```bash
ffmpeg -i movie.mov -c:v libx264 -crf 23 -preset medium -c:a aac -b:a 128k movie.mp4
```
*Explanation of the command and the options:*
Even the 1B variant gave me that one, along with good explanations of the various options and what to play with to tweak the result.
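For what it's worth, the tweaks those explanations typically point at (raising CRF, slowing the preset, downscaling) look like this. It's a sketch with a synthetic clip standing in for a real movie.mov, and it skips itself if ffmpeg isn't installed:

```shell
# Skip quietly if ffmpeg isn't installed in this environment.
command -v ffmpeg >/dev/null 2>&1 || exit 0

# Synthetic 720p input standing in for a real movie.mov.
ffmpeg -loglevel error -y -f lavfi -i testsrc=duration=2:size=1280x720:rate=24 movie.mov

# Smaller file: raise CRF (higher = smaller but lower quality) and slow the preset.
ffmpeg -loglevel error -y -i movie.mov -c:v libx264 -crf 26 -preset slow \
    -c:a aac -b:a 128k smaller.mp4

# Smaller still: also downscale to 480p height; -2 keeps the width an even
# number while preserving the aspect ratio (libx264 requires even dimensions).
ffmpeg -loglevel error -y -i movie.mov -vf scale=-2:480 -c:v libx264 -crf 26 \
    -c:a aac -b:a 128k smaller_480p.mp4
```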
i’m pretty surprised 1B parameters is enough for it to still fluently remember ffmpeg-fu
Yeah the newer small models continue to surprise me as well. I uploaded the full output from gemma3:1b-it-q8_0 here[1].
[1]: https://rentry.co/yu36i4d3
I asked Copilot to make an Excel formula that rounds all numbers up to the next integer... it took 4 back-and-forth messages and 15 minutes until it was working... Google took 5 minutes.
People are responding with "works for me", but I've found that with Windows Copilot it was impossible to reset the AI state, and past prompts would color new inputs.
The new chat, or new conversation buttons seem to do nothing.
So much for the quality testing done by Microsoft...
[dead]
I don't have experience with CoPilot, but I do with other LLMs. I'm not sure that omitting "provide me with" is enough to get the job done, generally, aside from being lucky that it correctly interprets the prompt. In my experience, other LLMs are just as prone to incorrect divination of what one means given telegraphic prompts.
I love Copilot in VSCode. I always select model "Claude Sonnet 3.7", when in Copilot since it lets me choose the LLM. What I love about Copilot is the tight integration with VSCode. I can just ask it to do something and it relies on the intelligence of Claude to get the right code generated, and then all Copilot is really doing is editing my code for me, reading whatever code Claude tells it to, to build context, etc.
That's a different, more useable copilot.
That's why I said "in VSCode" because I have no idea what this guy is running, but it's almost a certainty the problem isn't copilot but it's a bad LLM and/or his bad prompt.
The Copilot integrated with Microsoft 365 doesn’t have a model switcher it just is what it is. You are talking about a completely different product that Microsoft calls the same names.
I'll say it for the third time: "in VSCode". There's no ambiguity about which Copilot that is.
VSCode Copilot or Copilot Chat?
imo, any VSCode user needs both extensions: "GitHub Copilot" for inline completions, and "GitHub Copilot Chat" for interactive, multi-turn coding chat/agent.
This discussion isn't about that at all; it's completely unrelated. It's talking about a chatbot.
And "GitHub Copilot Chat" VSCode extension is _also_ a chatbot.
But it's not Microsoft 365 copilot
--OR-- I mentioned a Copilot Product in a conversation about Copilot Products.
Claude Sonnet 3.7 is my default as well in Visual Studio. i have been playing with their new default GPT-4.1. its not bad.
I haven't tried GPT-4.1 yet in VSCode Copilot. I was using 'Claude Sonnet 4' until it was struggling on something yesterday which 3.7 seemed to easily do. So I reverted back to 3.7. I'm not so sure Sonnet 4 was a step forward in coding. It might be a step back.
First off, that’s a really bad prompt - LLMs don’t have this magic ability to read your mind. Second, despite how bad it is, Copilot just gave me the correct code.
[dead]
People think that llms are an excuse to be lazy. You have to put some effort into the prompt!
Ironically, Copilot is the lazy one; you have to prompt it to death, whereas the others are aligned and actually provide answers with the same prompt.
(Perhaps copilot is not lazy, just stupid relative to its peers.)
No, you don't. ChatGPT easily answers this question posed verbatim.
Microsoft has wasted their opportunity.
When ChatGPT first came out, Satya and Microsoft were seen as visionaries for their wisdom in investing in Open AI. Then competitors caught up while Microsoft stood still. Their integration with ChatGPT produced poor results [1] reminding people of Tay [2]. Bing failed to capitalize on AI, while Proclarity showed what an AI-powered search engine should really look like. Copilot failed to live up to its promise. Then Claude.ai, Gemini 2.0 caught up with or exceeded ChatGPT, and Microsoft still doesn't have their own model.
[1] https://www.nytimes.com/2023/02/16/technology/bing-chatbot-m...
[2] https://en.wikipedia.org/wiki/Tay_(chatbot)
I'll add that Google's search AI integration is quite good. I'm actually amazed how well it works, given the scale of Google Search. Nowadays I don't click search results in 50% of searches, because the Google AI response is good enough for me.
Maybe we have a different Google AI down here in south Texas, but the Google search AI results I receive are laughably bad.
It has made up tags for cli functions, suggested nonexistent functions with usage instructions, it’s given me operations in the wrong order, and my personal favorite it gave me a code example in the wrong language (think replying Visual Basic for C).
The AI Overviews (on the main SRP) is pretty hit or miss. The new "AI Mode" (separate tab) is _very_ good.
It cracks me up that I can only find animated marketing bs pages about this that show nothing of interest, but I can't actually find how to use it despite minutes of looking.
Well done Google Marketing, well done.
Another product carefully kept away from the grubby little hands of potential users!
They even disabled it if you didn’t use the right combination of browser and willingness to share your data.
Seems a lot more like general availability on my end now, though, these past few days. One can try Google dot com, slash AIMode.
Not nearly as good as using Gemini 2.5 pro which they do offer for free but I forget where. AI studio? So many ways to access it.
It's about half and half. Really depends on whether there are good results that Gemini can summarize. If not, it gets creative. ChatGPT is generally much better.
ChatGPT is better, but Google owns all of the panes of glass (for now).
We've never seen a "Dog Pile vs Yahoo" battle when the giants are of this scale.
It'll be interesting to see if Google can catch up with ChatGPT (seems likely) and if they simply win by default because they're in all of the places (also seems likely). It'd be pretty wild for ChatGPT to win, honestly.
People are forming deep personal attachments to it. They think all their chat history is in context, and act as if it knows them personally and has formed an opinion about them. They are replacing social interaction with it. I doubt someone in that deep would want to switch to something new very easily.
I doubt that's a very high percentage of users. Most people use it as a productivity-boosting tool like search engine.
I think a bigger issue is people just believing all the stuff that AI tells them, without bothering to check it.
It's a spectrum.
A lot of people who are unfamiliar with how the technology works talk about "my GPT". Google that phrase, or start watching for it to crop up in conversation.
On the other end of the spectrum, there are lots of tiny little pockets like this:
https://www.reddit.com/r/MyBoyfriendIsAI/
this is the first I've heard of anyone praising it... the results are usually outright wrong or useless.
A lot of folks probably just assume it's correct
My buddy learned this last week when we went out of the way to get gas at a wholesale store and he swore he looked it up and claimed it was open late. Well, it wasn’t.
Agree, I've seen enough wrong answers that I think it's actively harmful to put AI answers at the top of Google search results.
The problem is that they made huge time consuming investments in technology to make copilot work with the various O365 controls, then confused everyone by slathering copilot on everything.
Microsoft hired the infamous guy from Inflection AI and fired the one responsible for Bing Chat which was actually good and it's all downhill from there. Bing Chat actually made Google nervous!
I still remember "You have been a bad user, I have been a good Bing". It was refreshing to see a chat with some personality at the time.
Even with their failures Microsoft still has OpenAI over a barrel.
Access to their IP, and 20% of revenue (not profit).
Firing the antitrust cannon can deal with that.
Altman will absolutely attempt this.
Would love to see how that plays out. It's a pretty absurd situation to eagerly sign the deal and take the funding, and then, when better deals start showing up, turn around and try to blow it up.
Can you give a simple description of what the antitrust case would be?
I think the complaint would be two things, however IANAL
1. Lack of access to compute resources. Microsoft intentionally slowing OpenAI's ability to scale up and grow dominant quickly vs. Copilot, a competing product. Microsoft shouldn't be able to use its dominance in the cloud compute market to unfairly influence the market for consumer AI.
2. Microsoft should not automatically gain OpenAI's IP in domains outside of the AI offerings the company was supplying when the initial agreement was made. If the terms of the contract must be upheld such that Microsoft gets all of OpenAI's IP, then it blocks OpenAI from competing in other markets, e.g. Windsurf vs. VS Code.
Probably, but it might not matter. They don't really need to compete on quality, just on the simplicity of selling a suite that's bundled together for enterprise, the same way they did with Teams, which is inferior to Slack in pretty much every way (last time I had to use it, anyway). Isn't their advantage always sales and distribution? Maybe it's different this time, I don't know.
The biggest problem with Microsoft is their UX. From finding out where to actually use their products, to signing in, wading through modals, popups, terms and agreements, redirects that don’t work and links that point to nowhere. Along the way you’ll run into inconsistent, decades old UI elements and marketing pages that fully misunderstand why you’re there.
It’s a big, unsolvable mess that will forever prevent them from competing with legacy-free, capable startups.
They should delete all their public facing websites and start over.
Bill Gates agreed with you 20 years ago :-) (this email never gets old)
https://www.osnews.com/story/19921/full-text-an-epic-bill-ga...
Thanks. That was a great read. Somehow missed that. Two points to make:
1. Not sure why OSNews characterised this as an "epic rant". I thought he was remarkably restrained in his tone given both his role and his (reasonable) expectations.
2. This to me shows just how hard it is for leaders at large companies to change the culture. At some point of scaling up, organisations stop being aligned to the vision of the leadership and become a seemingly autonomous entity. The craziness that Bill highlights in his email is clearly not a reflection of his vision, and in fact had materialised despite his clear wishes.
When we think about how "easy" it would be for the executive of a large organisation to change it, those of us not experienced at this level have an unrealistic expectation. It's my belief that large organisations are almost impossible to "turn around" once they get big enough and develop enough momentum regarding cultural/behavioural norms. These norms survive staff changes at pretty much every level. Changing it requires a multi-year absolute commitment from the top down. Pretty rare in my experience.
That was epic. The type of email we all dread to receive at work. Can’t fault Bill for his detail though, most of those kind of emails are “website slow, make fast”.
That email is a gem.
> When SeattlePI asked Bill Gates about this particular email last week, he chuckled. “There’s not a day that I don’t send a piece of e-mail… like that piece of e-mail. That’s my job.”
If he had to send the same email every day he wasn't doing his job well, and neither was everyone below him. Even a fraction of that list is too much.
It's not only public facing websites - Azure is also pretty inconsistent and lately any offer to preview a new UI was a downgrade and I happily reverted back - it's like they have a mandatory font and whitespace randomizer for any product. Also while far from a power user I've hit glitches that caused support tickets and are avoidable with clearer UX. Copilot in Azure - if it works at all - has been pretty useless.
Their UX, their naming conventions from products to frameworks and services, their plug-pulling on products, their user hostility, and so on all point to the root of the problem being elsewhere. I think Microsoft is no longer reformable. It is a behemoth that will probably continue to coast along like a braindead Godzilla zombie that just floats due to its sheer size.
That's the feel I get too :/
Too many crazy presentations on 'data' that are calling the sky purple, and everyone just nods along, OKs it, and gives promos all around.
Those stupid dialogs that make you think they'll help you solve an issue but actually just waste 5-10 minutes "scanning", only to link you to irrelevant webpages that sometimes don't exist.
Wonder why they’re going so slowly…
(& small typo, “Proclarity” = *Perplexity)
The lack of a true first-party model is glaring now that everyone else is racing ahead with their own stacks
How have they failed? They still get 49% of OpenAI's profits, so if OpenAI wins, Microsoft wins.
That can be considered victory only if Microsoft is an investment firm as opposed to a software company.
> That can be considered victory only if Microsoft is an investment firm as opposed to a software company.
An investment vehicle would be more accurate, but that's the primary function of every broadly-held publicly-traded firm.
Aren't they actually an investment firm with a passing interest in software?
Maybe you should actually read one of their earnings reports. They don’t make $250B in annual revenue because of a “passing interest in software”.
You’re contradicting yourself with that statement. Microsoft is seen as a mercenary… yes they make a lot of money, that’s proof they’re a mercenary. If you want to prove they are not then point to software categories they invented, not how much money they are making.
They're like the Merck of the pharma world.
Fair point
The biggest issue with Copilot might not be the model itself, but the naming strategy. One name is used for several completely different products, and users end up totally confused. You think you're using GitHub Copilot, but it's actually M365 Copilot, and you don't even get to choose the model. Microsoft really needs to make this clearer.
You are probably not a customer as a decision maker in a big traditional company/organization. MS is obfuscating on purpose so they can say in sales decks that if you buy this, you get all these Copilots and your Fortune 1000 business is AI-proof. What they leave out is that not every Copilot is equal.
This worked very well for IBM Watson.
"Nobody ever got fired for buying IBM" is a quote your grandparents used.
What doesn't work anymore for IBM still certainly works for Oracle and the rest of the sales-driven tech giants.
That's a good point.
And us plebs working for the company are left to deal with the inferior tool.
Insert "No, We have copilot at home children" meme here.
For some reason I had also gotten the impression that Copilot was powered by OpenAI in some way. Perhaps the Microsoft OpenAI partnership gave me that impression.
I also wasn't aware that there was an OpenAI/Microsoft rivalry; I had the impression that Microsoft put a lot of money into OpenAI and that ChatGPT ran on Azure, or was at least available as an offering via Azure.
Copilot is powered by a Microsoft-hosted version of OpenAI's models. If you ask it, it says "I'm based on GPT-4, a large language model developed by OpenAI. Specifically, you're chatting with Microsoft Copilot, which integrates GPT-4 with additional tools and capabilities like web browsing, image understanding, and code execution to help with a wide range of tasks."
OpenAI's models are also available via Azure.
>Microsoft really needs to make this clearer.
LOL. We're talking about the company that used to slap a non-sensical .NET suffix on everything.
And they’ve renamed Office pretty much every year so I’m not even sure what it’s called any more, Microsoft Life or something.
It’s so strange that they keep renaming because Office (and office.com) is perfectly usable.
And now it slaps "xbox" on everything.
Have you ever used copilot? Its Garbage with a capital G. I dont think its even as useful as GPT 3.
Copilot is brainrot and its killing microsoft.
Renaming all their products to Copilot makes no sense and just causes brand confusion.
Copilot getting access to your entire 365/Azure tenant is just a security nightmare waiting to happen (in fact, there's already that one published and presumably patched vuln).
It has so many shackles on that its functionally useless. Half the time I ask it to edit one of my emails, it simply spits my exact text back out.
Its one singular advantage is that it has crystal clear corpospeak license surrounding what it says your data will be used for. Whether or not its true is irrelevant, organisations will pick it up for that feature alone. No one ever got fired for choosing ibm etc.
Classic big vendor play: it's not about being good, it's about being safe to buy
You will need way more to kill Microsoft. Brand confusion has been in the Microsoft DNA since the very beginning
True. I do mean more like "eating it from within" rather than "This will be written on its tombstone"
That would be pretty funny to be honest :) I can imagine a very deep quote, signed “Copilot 360 Business+
> Renaming all their products to Copilot makes no sense and just causes brand confusion.
This reminds me of IBM Watson back in the day
Thats WatsonX for you now.
Yeah thats pretty apt.
I use many LLM tools (ChatGPT, Claude, Gemini, GitHub Copilot, etc), I have never ever gotten any version of MS Copilot to do anything useful for me. I've been stunned at how they can use the same models that ChatGPT does, copy their use cases, and still deliver a turd.
The Github Copilot (in VS Code especially) is the only application of LLMs that I've found useful from Microsoft. I would have loved amazing Copilot support in Word for working on a large complex document, but I haven't found that to work well.
YMMV, but I found it useful for drafting a pull request on GitHub, where it basically just did all the boring work, including finding the particular line in a large codebase that was throwing the error. It wasn't a hard problem, but it still would have required a bit of mental effort on my part, and I'd rather spend that reading a book.
I've come to the conclusion that once companies get big enough, they are unable to build anything really useful. I'm sure there are exceptions, but it feels like 99% of the time this is true. The best they can do is acquire a company and hope that goes well.
same
msft had a massive edge. it had exclusive access to models + had web search before anyone.
they flopped this royally, just like windows mobile. they created a shitty ux by shoving it inside the bing app, then they decided to charge for it instead of capturing all enterprise value.
lastly, the product has stalled and missed on their biggest opportunity which is tapping into the data. you can think it's because of how complex it must be, but then openai and everybody else did it.
it's truly a lesson in product mismanagement, once again, from microsoft
It’s not all over yet.
MSFT is the world’s best 2nd mover and can often make profits on ideas pioneered/launched by other companies before MSFT.
MSFT came late to the cloud party (2011? AWS launched in 2006 IIRC), still they became a big player there (~25% market share in 2025).
Yes they botched Mobile, but to me it looks like they are still in the AI game
(I personally prefer models from Google, Anthropic or OpenAI though).
Just say the quiet part out loud, Microsoft is such a large anti-competitive company they literally don't have to build competing products. Customers are not even going to evaluate other options because it will be bundled with their other mediocre services.
Microsoft's problem is they still make products. Meta creates $70 billion net profit per year and they don't create much and have minimal overhead.
It was bound to happen. Corporations always commit suicide once they're successful and it almost always looks the same. It's why I don't invest time in non-free software.
Netflix and NVIDIA come as counter-example. We're looking at 20-30 years of continuous self-disruption and innovation here.
I loved my Windows Phone UX, so simple and reactive.
If only MS copilot was an actual co-pilot in my daily flight of work… but no, it cannot actually draft an email, create planner tasks or book meetings. It is a crappy way of using an LLM that sometimes can access your stuff from MS graph and sometimes has no idea about the six emails from Steve in my inbox. And no, its PowerPoints are not good either, they are LinkedIn in ppt-form.
Here are the results I just got.
1. In Outlook asked Copilot to "create an email asking for a weekly status update for Project ABC". Iterating a bit, I asked it to be more casual and shorter. Here's what I got:
That seems pretty on point. Not exactly my style, but to say it "cannot actually draft an email" is clearly wrong.

2. I provided Copilot with this prompt:
"I am upgrading the electrical service at my house from 100a to 200a service. Create a project plan in Planner for this project."
Here's the response I got:
It did produce a working script that required 1 configuration edit. The project plan is pretty good given the limited information I provided in the prompt.

3. I provided Copilot with this prompt:
Not a very good response.

It seems like these examples show that it has about the same capabilities as a basic chat interface like Claude or ChatGPT, without any benefit from being integrated with the Microsoft ecosystem.
You nailed it. Microsoft should have a huge advantage with depth of integration, but for some reason treats Copilot in office as a glorified chat iframe. It's a huge missed opportunity.
If you ask it to 'send' an email instead, as I did the first time I saw it pop up (in my email client...), my first real use case, it will tell you it can't, just like your calendar example.
Even 'draft' really, if you forget what you've done, close your eyes and think about it from scratch, surely you expect that to be drafted in your ...drafts?
That first email is confusing. If I received that email I'd assume my manager was going to be the one sending out the updates.
It needs to be a lot more clear and direct about the expectations of others.
"I'd like us to do X" is super passive, and a thousand miles from "You must do X"
What a useless exercise. OP was saying the AI just does text output, and cannot DO any of these things.
These are trash.
1) "Iterating a bit". Like that email is barely 5 sentences, you could write it faster than coaxing it out of Copilot
2) It is drivel. Could have stopped at *Consult with a licensed electrician
3) Well...
Cool, so you've spent just as much time "iterating a bit" as you would have done typing, your thinking and typing skills have atrophied a bit more, and you've made your colleagues lives that bit duller by sending them something written by the "average person".
and all in 4:3! Why, why microsoft.
Is anyone here not confused about how Copilot licenses work (free vs paid) and how to choose models for different types of task?
I'm confused over what anyone means when they say "Copilot", since it could mean the VS Code editor features or various features on github.com or the thing that Microsoft sell as part of their 365 office software.
I think this article is about the 365 suite.
Don't forget about the Copilot in Windows, which is different from the Copilot in Bing, which is different from Copilot in Edge, which is different from the Copilot in Copilot Studio... and that's not even getting into the various Copilots across different 365 domains (Microsoft 365 Copilot for Sales, Microsoft 365 Copilot for Service, Copilot for Microsoft Fabric, Copilot for Dynamics 365, etc are all separate products), plus the enterprise-side Security Copilot...
Good old Microsoft naming. I'll never understand how they can think it's a good idea to release multiple entirely different products and call them all variations of the same thing. One would think they would have solved this problem a decade ago and yet every few years it happens again.
I love how Microsoft will have three products that compete with themselves, e.g. Lync and Skype and Teams all coexisting at the same time.
Someone thought this would reduce the marketing budget, since they're now cross-promoting.
Yeah I'm talking about the thing you see when you go to https://m365.cloud.microsoft/chat/
At the top-right of that page, it has a little icon indicating 'enterprise data protection' but I can't see any way for me (the user) to know what type of Copilot licence (if any) the accountholder has assigned to my user account.
If you have the fancy copilot pro, you'll see it in the rest of your office account, such as outlook, where additional features are available such as email summarize etc.
Such an easy URL too
There's also "Copilot" which is the AI assistant accessible online and via a desktop app on Windows (and even other OSs)
Your description also works for GitHub copilot.
As with many Microsoft products licensing is the most complex part of it.
Yes! I also feel like I use up my quota with my paid Copilot account faster than with my free ChatGPT one.
Just ask Copilot
Microsoft just did a rug pull by introducing dramatically reduced rate limits on copilot requests for paying customers too.
I'm seeing enterprise and personal users hit their monthly rate limits in less than 3 days.
And Windsurf's free tier is almost as good as Copilot in my experience.
Ironically Windsurf is also owned by Microsoft indirectly...
It's not just Microsoft. All of these companies competing in "AI coding" went to having "premium" requests when using bigger models and then unlimited usage with okayish models.
Why is this being downvoted? I’ve seen similar behavior, and it’s not outside the realm of possibility that MS would choose meager context windows and API limits in favor of profit.
A lot of discourse about Microsoft that paints them in a less-than-positive light gets downvoted without reason. I read a blog post a while back about their "brand reputation farm" that follows social media posts and tries to de-rank or drown out content. If I find the link I'll update this comment.
I'm not sure whether Microsoft Copilot and ChatGPT use different system prompts or if there's something else behind it, but Copilot tends to have this overly cautious, sterile tone. It always seems to err on the side of safety, whereas ChatGPT generally just does what you ask as long as it's reasonable.
So it often comes down to this choice: Open https://copilot.cloud.microsoft/, go through the Microsoft 365 login process, dig your phone out for two-factor authentication, approve it via Microsoft Authenticator, finally type your request only to get a response that feels strangely lobotomized.
Or… just go to https://chatgpt.com/, type your prompt, and actually get an answer you can work with.
It feels like every part of Microsoft wants to do the right thing, but in the end they come out with an inferior product.
I think that must be it. The system prompt is likely it.
Just yesterday I was talking to a customer who was so happy with our "co-pilot" compared to ChatGPT and others that he wants to roll it out to the rest of the company.
We use Azure-OpenAI + RAG + system prompt targeted at architects (AEC). It really seems the system prompt makes all/most of the difference. Because now, users will always get answers targeted towards their profession/industry.
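As a rough illustration of that last point (every name below is a placeholder, not this commenter's actual setup): with the Azure OpenAI chat completions API, the profession-targeted system prompt is simply the first entry in the messages array, so that's the one knob that changes the whole tone of the answers.

```shell
# Hypothetical resource and deployment names; real values come from your Azure portal.
RESOURCE="my-resource"
DEPLOYMENT="my-gpt-deployment"

# The profession-targeted system prompt sits at the top of the messages array.
PAYLOAD=$(cat <<'JSON'
{
  "messages": [
    {"role": "system",
     "content": "You are an assistant for architects (AEC). Ground every answer in building-industry practice, codes, and terminology."},
    {"role": "user",
     "content": "Summarize the trade-offs of curtain wall vs. window wall construction."}
  ]
}
JSON
)

# The actual request would look like this (needs a real key, so it is left commented out):
#   curl -sS "https://${RESOURCE}.openai.azure.com/openai/deployments/${DEPLOYMENT}/chat/completions?api-version=2024-02-01" \
#     -H "api-key: $AZURE_OPENAI_KEY" -H "Content-Type: application/json" \
#     -d "$PAYLOAD"
echo "$PAYLOAD"
```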
I wonder if a Lexus/Toyota Acura/Honda Lamborghini/Audi OpenAI/Microsoft marketing split isn't in the best interests of tech giants going forward since LLMs are nondeterministic, unlike the deterministic nation-states they've built up till now...
Why is Microsoft special?
ChatGPT. Perplexity. Google AI Mode. All let you get a message off.
… WAIT! copilot dot microsoft dot com lets you just send a message without logging in.
—
heh, the second result on DuckDuckGo is an MS article: “What is Copilot, and how can you use it?”
Products mentioned in the article, they say:
| Copilot | Copilot app | Copilot for individuals |
And a link for each one. Does Satya squirm when he sees that, but doesn’t have the power to change it?
Also the word “individuals” (allegedly previously mentioned) appears only once on the page.
If they want to hurt Microsoft where it hurts, OpenAI should build an agent to write Markdown, docx, and html versions of documents from simple chat or audio prompts. Imagine dictating your documents to AI and having it build, upload, and convert the document into relevant file formats... I can't wait!!
My enterprise onboarded Copilot and Copilot agents and it’s fairly successful.
My observation is that in a disorganized and over-documented organization, Copilot flattens things to an exec-summary language that moves them along a lot faster. It enables communication beyond the limiting pace of individuals learning to communicate hard things with nuance (or, sometimes, when people are reluctant to take the next step in the cycle).
It lifts to a baseline that is higher than before. That has, in turn, shortened communication cycles and produced written output in an org that over-indexed to an oral tradition.
I prefer Gemini Pro 2.5 - Google is really fumbling the ball by not having a solid subscription access model for it (plus some CLI coding agent) and enterprise access.
Google announced a Gemini CLI tool a few hours ago! And then ... they unpublished everything shortly after? It's weird: https://news.ycombinator.com/item?id=44373754
Damn the announcement looks awesome - hope they bring it back without downgrades!
Doesn't Microsoft own 49% of OpenAI and receives 20% of the revenue (according to ChatGPT)? In which case, what is Microsoft so upset about?
In hindsight, MSFT made out like a bandit in that deal since OpenAI seemingly tapped out of MSFT resources already. MSFT can always spin up its own LLM. It’s not that expensive for them and they can wait it out for tech to mature a bit more.
That they only receive 20% of the revenue.
The issue is more about control and positioning
the other 51%
The problem is it is very hard to make changes and build innovative new products within big tech, at a pace to compete with smaller companies. Big tech succeeds despite it since the resource disparity is too much.
Since the launch of ChatGPT, Microsoft has had access to it and even owns some of the most popular code editors, and where did that take them? This is why Meta had to launch Threads with a very small team, since a big team in Big Tech just can't compete.
Of course, like everything else, there are no absolutes, and when Big Tech feels there is an existential crisis around something they do start improving; however, such moments are few and far between.
Our management introduced copilot last year, there was some mild hype, people were curious, gave it a spin, but it didn’t stick around in many conversations.
Now that everyone has access to Claude and claude-code, Copilot barely gets mentioned anymore. Maybe this wave dies down or they improve it, anyway these tools still have a long long way to go.
I think Microsoft underestimated how much friction kills adoption
I read today that OpenAI is planning an "AI super app" that would have canvas, word processing, etc., all in one work app. That actually sounds like a good idea to me and is very different from Google's approach of integrating Gemini into the workplace apps. Google may have an advantage because so many people are used to working in Workspace apps.
Mind providing a link? I may or may not be creating the exact same thing...
Also if anyone from OpenAI or any of its competitors wants to talk my email is on my HN profile ;-)
https://www.theinformation.com/articles/openai-quietly-desig...
Thank you! Sadly as a struggling entrepreneur I do not have $299 to blow on one article. I'll take it as validation that my idea has legs... and I'm likely slightly ahead of them
That's the app many of us are creating and I bet some of them are going to be much better than the OpenAI one
Tamo junto
I don't understand how it's not more useful to most people with Copilot subscriptions at work. It has access to my work's OneDrive; it really should be the most commonly used LLM.
Copilot 365 conserves tokens aggressively, which is why results are bad.
“awareness in the consumer space doesn’t necessarily translate into fit for use in the commercial space.”
Man they sound like a mainframe manufacturer at the dawn of the PC era.
Aside from the product value and market sentiment around M365 Copilot. One should wonder about the timing of this article so close to Microsoft's fiscal year end.
I have a feeling a lot of "success" of OpenAI in the enterprise space is simply nepotism or in-network tech mafia buddies migrating from diversity hires to ChatGPT subscriptions.
Data coming from this discussion thread and my team's experience at work paint a different picture.
ChatGPT simply is a much better product all around. Period.
I've been using it to automate some very basic html/js stuff and it's always forgetting context and stomping on old stuff that's not related to the prompt. I guess this is just leaky abstraction of its in context memory compression. It's manageable but all it's doing for me is allowing me to be lazier. It certainly isn't making me any more productive.
If I try to get it to do stuff outside my domain expertise it's making errors I can't catch. So I suppose if move fast and break things works for your business, then that's fine.
But that begs the question, a much better product than what?
Either way, we saw them fire a bunch of people and "replace them with AI," so it's not out of the question this is a move toward "AI tech leadership" tax subsidization as DEI is phased out.
We're paying for Copilot for Office365. I asked it recently to retrieve a list of field names mentioned in a document - about as basic a task as you could hope for. It told me it couldn't do so.
My precise request: "Extract the list of field names in Exhibit A."
Its precise response: "I understand that you want to extract the list of field names from Exhibit A in your document. Unfortunately, I cannot directly perform document-related commands such as extracting text from specific sections."
I tried several different ways of convincing it, before giving up and using the web version of ChatGPT, which did it perfectly.
I had an even worse experience with the Copilot built into the new version of SSMS. It just won't look at the query window at all. You have to copy and paste the text of your query into the chat window ... which, like, what's the point then?
I only used free Microsoft Copilot once back when GPT-4 came out and it wasn’t free on OpenAI yet. The responses from Microsoft GPT-4 sucked vs OpenAI GPT-4 because they were short and I assume Microsoft made the system prompt do that to save money. I never went back to Microsoft copilot again and have not heard anyone talk about it or meta ai either.
The tension between being a partner and a competitor here was bound to bubble up
it's not a rivalry if one party is not in the competition, it's just jealousy
Because of the partnership with OpenAI, I always assumed Copilot was just built on top of GPT.
So how did MS make Copilot suck, if it started with the same base?
Microsoft Copilot uses their own model that is originally based on GPT-4 if I’m not mistaken.
But, it’s mostly a RAG tool, “grounded in web” as they say. When you give Copilot a query, it uses the model to reword your query into an optimal Bing search query, fetches the results, and then crafts output using the model.
I commend their attempt to use Bing as a source of data to keep up to date and reduce hallucinations, especially in an enterprise setting where users may be more sensitive to false information. However, as a result, some of the answers it gives can only be as good as the Bing search results.
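The flow described above is roughly a search-grounded RAG loop. A hedged sketch, with stand-in functions in place of the real (non-public) Copilot pipeline and Bing API:

```python
# Illustrative only: the rewriting and search steps are trivial stand-ins
# for what an LLM and the Bing API would do in the actual product.
def rewrite_for_search(user_query: str) -> str:
    # In the real system a model rewrites the query for search;
    # here we just strip the trailing question mark.
    return user_query.rstrip("?")

def search(query: str) -> list[str]:
    # Stand-in for a web search call returning result snippets.
    return [f"snippet about: {query}"]

def answer(user_query: str) -> str:
    search_query = rewrite_for_search(user_query)
    snippets = search(search_query)
    # The model would synthesize an answer grounded in these snippets;
    # this stub just shows the grounding step.
    grounding = "\n".join(snippets)
    return f"Answer to '{user_query}', grounded in:\n{grounding}"

print(answer("What changed in the latest ffmpeg release?"))
```

The structure makes the limitation obvious: everything downstream of `search` depends on the quality of the snippets, so a bad search result caps the quality of the final answer.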
Bing isn't terrible though, is it? DuckDuckGo uses it, or at least used to, and that trade off was fine most of the time.
It’s not necessarily terrible. It just sometimes leaves you wishing it was “smarter”. When I get a bad result, trying the same query on ChatGPT gives a much better response.
Seems like with their own resources, and also owning part of GPT, they should be able to pivot and at least make a programming tool equal to Chat GPT.
Real talk! Copilot is so bad. It's literally useless. And they charge an absolute arm for it. Like how is it soooo much worse than Chat? I am a frustrated monkey when I use Copilot.
Microsoft's decision to name this product Copilot has to be the result of some form of internal sabotage, I refuse to believe otherwise.
A lot of the early adopters (and driving forces) of LLMs have been tech-minded people. This means it's quite a good idea NOT to confuse them.
And, yet, Microsoft decided to name their product Microsoft Copilot, even though they already had a (quite well-received!!) Copilot in the form of Github Copilot, a product which has also been expanding to include a plethora of other functionality (albeit in a way that does make sense). How is this not incredibly confusing?
So what actually _is_ Copilot? Is there a bing copilot? A copilot in windows machines? Is it an online service? (I saw someone post a link to an office 365)?
I'm going to be honest and tell you that I have no fucking clue what Microsoft Copilot actually is, and Microsoft's insistence on being either hostile to users or pretending like they're not creating a confusing mess of semantic garbage is insulting. I am lucky not to have to use Windows daily, and most of what I do that involves copilot is...Github Copilot.
I am knee-deep into LLMs. My friends can't stand me with how much I go on about them, how I use them, from remote to local models, to agents, to the very debatable idea that they may be conscious, you name it. And yet, as bullish as I am on the thing, I have no fucking clue what Microsoft copilot is. Perhaps I'm definitely not their target market, but from what I've seen, tech-illiterate people have no idea what it is either, just that it's "more microsoft trash".
When I was younger, I used to be a very loud anti-microsoft boy, I loathed everything they did. Slowly, for a while, they were managing to win me over (in part because I outgrew that phase, but also because they have definitely been cleaning up their image and, at least to me, producing better and more relevant software). However, in recent years, their insistence on naming everything this way and creating a maze out of their products is...baffling. I feel myself not being able to stand MS again.
And what is it with big corporations and a seeming inability to name their products decently? This is appalling. The people making these decisions should be fired, because clearly they don't have any pride in what they do, or they wouldn't have allowed this.
Get your shit together, microsoft!
> Microsoft's decision to name this product Copilot has to be the result of some form of internal sabotage
If you look at this in isolation, yes. If you look at this historically, it's totally on-brand for Microsoft. Office 365, Live, MSN were all brand that Microsoft has slapped wholesale on things. Microsoft has always been reactive when it comes to branding, rather than proactive.
How are managers going to earn their keep if they don't rebrand and re-org?!
I'm reminded of when .NET was released suddenly everything was .NET, even an office release was named after it. Then it finally narrowed down into the programming languages we know and love or hate depending on your vibe. I assume this will happen here too eventually.
Everything is Copilot, but they're all different products, and one of them is just a launcher to Office apps, each with their own assistant called Copilot
Copilot is a vibe.
You don't "use Copilot". You Copilot. Everything is Copilot. Windows, Office, PCs, Bing. All Copilot.
Unless you mean Copilot (Classic).
So we have Copilot included in our MS license. The problem is that it's the dipshit version. We can't use extended thinking.
But Copilot free allows me to use extended thinking. So to get round the problem we download the standalone app and don't sign in.
It seems wild that the free version is better than the paid version.
The problem is Copilot is dumb. Allegedly it uses the same models ChatGPT does, but Microsoft seems to have done something to Copilot which lobotomises it so badly it's unusable for anything serious. Great for the MS ecosystem integration, but as a general-purpose tool, it's nowhere near ChatGPT.
> It’s unclear whether OpenAI’s momentum with corporations will continue, but the company recently said it has 3 million paying business users
That's the only data point the article has, and it is incomplete (no Copilot numbers).
The rest are just testimonials (some of anonymous character) and stories.
Who's having more success then? No one knows. It's up to the reader to decide.
Looks like made-up rivalry article to me. Draws clicks, no actual content inside.
[dead]
[dead]
[dead]
[flagged]
Agreed. It’s pretty gross tbh
Can you explain what you mean?
Yet again, Microsoft can't exist without buying what ALREADY works, this is sad
I used to like Microsoft; I now despise them, and the more you dig, the more shady stuff emerges from that supposed 'company'
I program at a non-tech Fortune 100 company. Our team is on a pilot program to try out AI-assisted programming at the company, and Cursor with OpenAI models are mostly what we are using. I have it integrated into my standard IDE workflow and try to write unit tests and the like with it.