The original edit.com, from around DOS 6.22 (and later 7.0, i.e. Win95), was my first IDE. Well, I started with QBasic, so I was already fairly familiar with it, as it was similar (or the same?), but when I started learning C/C++ with DJGPP, I just continued using edit.com.
My "project file" was `e.bat` with `edit file1.cpp file2.cpp file3.cpp`, as it was one of the few editors that I knew that had a decent multi file support with easy switching (alt-1,2,3 ..). I still continue remapping editor keybindings to switch to files with alt/cmd-1,2,3,.. and try to have my "active set" as few of the first files in the editor
It wasn't a great code editor, as it didn't have syntax highlighting, and the indent behaviour wasn't super great (which is why, early in my career, my indent was two spaces, as that was easy enough to do by hand and wasn't too much like a tab). But I felt very immediate with the code anyway.
I knew that many others used editors like `qedit`, but somehow they never clicked with me. The unixy editors didn't feel right in DOS either.
Quickly trying this, it doesn't seem to switch buffers with the same keybindings, even if it does seem to support multiple buffers.
You should raise that as an issue. If things like that get in early enough, they get heard.
And it wasn't just similar. It was literally the same. EDIT.COM simply started QBASIC up with a special flag. One could just run QBASIC with the flag. As I said at https://news.ycombinator.com/item?id=44037509 , I actually did, just for kicks.
It may not have had syntax highlighting, but it did have syntax capitalization (for lack of a better term?). If you typed a line in all lowercase, after hitting enter it would automatically uppercase the reserved words. It wasn't much, but it helped.
I remember using edlin a lot in my early computing days. It was murder to learn but once you knew how to wield it, it was excellent. I don’t know why I was forced to learn that but I needed it for something and stuck to it the entire time I used DOS for anything. And people were in awe when you used it while they watched. “What the hell was that!?”
Vim (and so many other editors, too) supported syntax highlighting for decades before TreeSitter even existed. Let’s not act as if this is a novel challenge.
I think not. Edit is for editing files in the terminal. What kind of files do you expect people to edit in the terminal? Most certainly files that would benefit from colors, not prose.
> “I’m 12 years late on this damn novel, and I’m struggling with it,” he said. “I have like 1,100 pages written, but I still have hundreds more pages to go. It’s a big mother of a book for whatever reason. Maybe I should’ve started writing smaller books when I began this, but it’s tough.”
He's averaging a hundred pages a year. Maybe not the fastest, but certainly not the slowest writer. With the size of his books... Cut the guy some slack.
There's an underlying assumption about "target audience for this editor" that you both share, that others, I suspect quite a few others, do not.
For starters, there's your assumption that there is "syntax" to be highlighted. Not every text file is something written in a computer programming language.
Micro is a great editor to replace stuff like nano. I think it would be a bad replacement for edit, though; edit is very barebones, and micro is very "upgradeable" through Lua. It also handles large files quite well.
Isn't the relatively large binary just because it's written in Golang? Go executables each ship their own copy of the Go runtime. That alone accounts for a big chunk of the size of small programs like this.
Nano also links against ncurses, which is about as big as the compressed tarball for micro. I'm looking at the dependency closures of each right now in nix-tree[1], and micro's closure's total size is 15.04 MiB while nano's is 12.78 MiB-- not really "orders of magnitude" (as a sibling commenter suggests) when you look at it like that.
Admittedly, nano's dependencies (`file` and `ncurses`, on my system) are likely to ship as part of the "base system" of any Linux distro anyway; the real size it adds to any distro is negligible. But there's no indication to me that micro is meaningfully "bloated", as the meme goes; it seems like what is required to run it is reasonable and comparable to other tools that serve the same purpose.
Yes, I couldn't use it on my router because of its size. No reason for a TUI to be so big. The advanced features beyond syntax highlighting aren't useful to me. It should have a light version.
There is also dte[1]. It fills exactly the same niche and offers an extremely lean editor with Unicode support, CUA key bindings and much more. It has replaced nano as my terminal editor.
As someone who uses CLI text editors frequently, but not often enough to build the muscle memory for vi shortcuts, I really appreciate simple text editors.
I know that I can press like 3-4 arbitrary buttons to mark a block and move it to a different place - how about I just mark it with my cursor and Ctrl-X, Ctrl-V, like every other freaking program out there?
I appreciate that I get vi on freshly installed or secured servers, but for things I use daily, I just want it to be KISS. Already counting on people answering "but vim is easy and simple". Opinions differ, I guess.
I like vim a lot, and I use vim-style bindings wherever I can.
But before I learned to ride a bike, I used training wheels, and before I learned enough vim to enjoy using vim, I leaned on nano.
When someone is first learning to explore GNU/Linux, or even to dig into the Unix guts of macOS, they're learning a whole new world, not just a new text editor. For some people, strategic bridges to what they know (like CUA or Windows-like shortcuts) can make this process more fun and less fatiguing. Sometimes that difference is decisive in keeping someone motivated to learn and explore more.
Anyway, I think vim is worth learning (and maybe some of the quirks of old-school vi, if you expect to work on old or strange systems). It's not a matter of if I recommend that someone learn vim, but when. And until it's time for them to explore an editor deeply, micro seems like a great fit for most people.
I also want to say: as enthusiasts of Unix-like operating systems, or as professionals who appreciate some of their enduring strengths, should we really embrace a "because it's there" doctrine? Isn't that same kind of thinking responsible for huge, frustrating piles of mediocrity that we work with every day and resent?
As someone who loves an ecosystem built first by volunteers as "just a hobby, nothing big and serious", I will say it's sad, if not hypocritical, to dismiss software projects just because they aren't already dominant players. Most software I love was once marginal, something its users went to lengths to install on the systems they used because they enjoyed it more than the defaults. We should, to the extent practical, try to leave a little room for that in the way we approach computing, even as we get older and grumpier.
I wouldn't consider vi usability to be overall bad.
Sure, affordance ("is it easy to grasp which moves I can make without expending much cognitive effort?") is terrible.
Setting up a decent environment is also a huge pain to get started with, but nowadays you can just hop into a prewarmed pool with premade setups like Normalvim or LunarVim.
But usability is not just "is it easy to learn"; it's also "once I know it, how hard is it to use?"
Once the moves are ingrained in your (muscle) memory it becomes incredibly efficient. di{, dat, yaf etc. are just the low-hanging fruit; once you start with regex, macros and plugins the fun really begins.
vi isn't usable. It sucks. But the fact is it's installed everywhere and you can learn how to use it in 10-15 minutes. It's easier to patch your ignorance of basic vi than it is to install software on every machine you'll ever edit on.
I learned vi a long time ago and use it when no other editor is at hand. In fact, I use several editors simultaneously, depending on the task at hand and what is available. I stumbled over dte because I like to try out new things. And because dte hits many sweet spots for me, I installed it on machines where I often need a terminal editor. Binding myself to only one tool just because I learned to use it at some point in time is not my philosophy. Thankfully, the open source world offers so many alternatives and innovations that there is something for almost all tastes and habits. It costs nothing besides building the muscle memory to switch as needed and wanted.
You realize that you're asking this in a discussion of a tool that is intended to be installed out of the box on Microsoft Windows, where vi is not installed out of the box, right? Your "everywhere" doesn't include the primary use case for what is being headlined here.
Genuinely curious: how do projects like these get approved in an org at the scale of Microsoft? Is this a side project by some devs or part of some product roadmap? How did they convince the leadership to spend time on this?
As they explained, they needed a text editor that works in a command line (for Windows Core server installs), works across SSH (because for a while now Windows included an SSH Server so you can completely manage it through SSH), and can be used by non-vi-experienced Windows administrators (i.e. a modeless editor).
This way gets coolness points, HN headlines, makes the programmers who wrote it happy, and probably is a contribution to making a couple of autistic people feel included.
Rust + EDIT.COM is kind of like remaking/remastering an old video game.
micro would have been an even better choice; the UX is impressively close to something like Sublime Text for a TUI, and it's very comfortable for those not used to modal editors.
I like micro and use it occasionally. I like this even more. I booted up the editor and instantly thought “it would be nice if there was a clickable buffer list right about…” and then realized my mouse was hovering over it. My next instant thought was that micro should have implemented this feature a long time ago
This is not a rewrite. Maybe it’s slightly inspired by the old thing, especially with having GUI-style clickable menus (something not seen often in terminal editors), but it’s much more modern.
It does seem "modern" in the sense that it is incredibly limited in functionality (EDIT.COM from DOS is much more full-featured) and deviates from well-established UI conventions.
CUA-style menubars aren't that uncommon in textmode editors. Midnight Commander's editor has traditional menubars with much more extensive functionality, as does jedsoft.org's Jed editor. Both of these also support mouse input on the TTY console via GPM, not just within a graphical terminal.
Does nano support mouse usage? It doesn't seem to work for me (but maybe it just needs to be enabled somewhere).
I guess they thought that inheriting 25 years of C code was more trouble than designing a new editor from scratch. But you'd have to ask the devs why they decided to go down that route
Each group needs to do something and they come up with the ideas. Sometimes it is driven by various leaders, e.g. “use copilot”. Sometimes it is an idea from some hackerdayz event which gets expanded. Sometimes this is driven in research units where you have a bunch of technical people twiddling their thumbs. Sometimes this is an idea that goes through deep analysis and multiple semesters before it gets funding.
Look at the amount of contributors here. This project was probably some strategic investment. It did not come to existence overnight.
First of all, an empty list of dependencies! I am sold!
It works great. I can't believe they did a whole TUI just for this, with dialogs and a file browser. I want to use it for a project of mine; I wonder how easy that is. If someone involved in the project is here: why not use Ratatui?
Code quality is top notch, can only say one thing:
Literally no deps except for a few dev-deps that make testing easier. That's a reasonable thing for something that you ship as a fundamental tool to be used by administrators as part of an OS like Windows. Take a look at lhecker's [1] responses for more info on the not-invented-here stuff.
About a month ago I heard Microsoft had their own Linux distribution to help Microsoft Windows users feel more at home. From memory, it was a rather simple GNOME setup. Nothing special.
I am surprised Microsoft didn't use the opportunity to create a Microsoft-specific Linux distro that replaces bash with PowerShell, offers Edit as well as vim, nano and other choices, and includes .NET and Visual Studio Code for developer installs.
Microsoft could have used this as their default WSL install.
It may not have won the war against typical distros like Ubuntu or Debian, but it could have gained a percentage and become a common choice for Windows users - and there are a lot of Windows users!
Microsoft cannot dominate the Linux kernel but it can gain control in userland. Imagine if they gained traction with their applications being installed by default in popular distributions.
This Microsoft Edit is available for Linux, like PowerShell is and others. If they had played their cards right -- perhaps -- 10 years ago, their distribution could have been in the top 5 today, simply because many Windows users would use it as their WSL distro.
Giant companies (like M$) can inject their fingerprints into my personal space. Now, we just need Microsoft Edit to have Copilot on by default...
I strongly suspect in time Microsoft will move to Linux, at least with things like Windows Server and embedded Windows. Then a gradual change for Windows desktop, or a sort of Windows Legacy vs Windows "Linux Workstation" desktop options. Linux kernel + some sort of 'super' WINE and a fallback tightly integrated Windows classic on a VM for certain programs.
Only problem is that the NT kernel in many ways is much better than the Linux kernel design wise (for example, the NT kernel can handle a total GPU driver crash and restore itself, which I think Linux would really struggle with - same with a lot of other drivers).
But Windows is increasingly a liability rather than an asset for Microsoft, especially in the server space. Their main revenue streams are Azure and Office 365, which are still growing at double digits, while Windows license growth is flat.
At a minimum I'd expect a Linux based version of Windows Server and some sort of Workstation version of Windows, based on Linux.
> I strongly suspect in time Microsoft will move to Linux, at least with things like Windows Server and embedded Windows.
You may not understand how important Microsoft considers backwards compatibility. Switching to a Linux kernel would eliminate all of that, and that is simply not an option for Microsoft.
The Linux kernel is missing a lot of esoteric things that the NT kernel has and that people use a lot, as well.
Windows as we use the word today (any variant) will not ever switch to a Linux kernel.
I do hope one day Microsoft puts a proper GUI on Linux though, no X, no Wayland, but something smarter and better than those. Probably also not likely to happen, but I'd love to see it if they could do it well.
I think most userspace applications won't interact directly with the NT kernel, hence a project like Wine is at all viable (and sometimes provides better compatibility with older Windows applications than Windows).
The reason why WSL is a thing is that developers in corps needed a way to run Linux. IT support and techs typically don't know anything about Linux and don't want to deal with supporting it. WSL fixes this problem.
Most developers don't want to use Linux at all. Many developers don't even really know how to use a terminal and rely on GUI tools.
> Most developers don't want to use Linux at all. Many developers don't even really know how to use a terminal and rely on GUI tools.
First of all, I disagree with this comment.
However, let's assume you are right: that the average "Windows developer" has little to zero skill in GNU/Linux.
If that is the case, it proves my point EVEN MORE that Microsoft missed out on creating a Microsoft Linux distro... designed to ship PowerShell, Visual Studio Code, Edit, and potentially Edge, SQL Server, etc.
It would still be Linux, but keeping to what they know from Windows -- and it would have given Microsoft more power in the Linux world.
You can disagree all you want. It is simply the truth. I've contracted in the UK and Europe. Most devs don't even know you can tab-complete most commands in modern shells (IIRC cmd.exe supports this). This goes for both Microsoft shops and shops that use open-source stacks, e.g. LAMP and similar.
I was in a large company in the NW and I knew two developers in a team of 30 that knew basic bash and vim.
There is a reason why "how do I exit vim" is a meme. Most people have no idea how to do it.
> If that is the case, it proves my point EVEN MORE that Microsoft missed out on creating a Microsoft Linux distro... designed to ship PowerShell, Visual Studio Code, Edit, and potentially Edge, SQL Server, etc.
Respectfully you seem to have never worked with the people I describe. You listed PowerShell as if they would use it. A former colleague of mine was quizzed why he would use PowerShell to write a script that would run on a Windows Server. They had expected him to write a C# program.
> I was in a large company in the NW and I knew two developers in a team of 30 that knew basic bash and vim.
I have worked for various companies as well, UK, Netherlands, etc.
Yes, in my experience, developers working in a Windows environment (Windows development) will have less knowledge of bash or Linux in general if they simply are not using it. These are developers using Windows, SQL Server, .NET, and other Microsoft-focused products.
I would agree that Windows developers have fewer skills with a shell, even CMD, much less PowerShell. However, if we are going to FOCUS on this userbase, they are more likely to accept a WSL Linux distro created by Microsoft, bundled with PowerShell, .NET, etc., than to use Ubuntu with bash, vim/nano or variants.
Also, I have worked for companies that focused on LAMP development, and their Linux skills were decent to pro. The only time someone would struggle is likely because they are junior level... and coming from a Windows background.
> Respectfully you seem to have never worked with the people I describe. You listed PowerShell as if they would use it. A former colleague of mine was quizzed why he would use PowerShell to write a script that would run on a Windows Server. They had expected him to write a C# program.
PowerShell... C#... both of which are Microsoft. PowerShell is .NET under the hood. Doesn't change my comment.
> Most developers don't want to use Linux at all.
I don't know if this is necessarily true. Many of the developers I know prefer GUI applications to CLI tooling, which I can get behind. That has nothing to do with Linux vs Windows though.
But my struggles with Windows are plentiful and the same goes for all my colleagues. I have a hard time believing that we are the outliers and not the rule.
> Sorry for the snarky comment, but then those devs are simply bad
Yes. That is the majority of developers. I had to explain to a dev today (nice enough guy) that he has to actually run the tests.
> Windows is legacy, the future is in open source.
You can claim the future is open source, but the industry has moved towards SaaS, PaaS, and IaaS, which is even more lock-in than using a proprietary OS such as Windows.
So while you might have an opensource OS, many of the programs you use will be proprietary in the worst way possible.
You needn't use your real name, of course, but for HN to be a community, users need some identity for other users to relate to. Otherwise we may as well have no usernames and no community, and that would be a different kind of forum. https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...
Your snark at my comments is completely unwarranted.
I really shouldn't have to explain what follows. But I will.
Installing any third-party dev tooling is done on the command line. Look up the instructions for installing Node LTS on Debian, or .NET, or Golang. You need to use the command line. Even on easier-to-use distros the procedure is the same. Depending on the tooling you may need to set additional environment variables, which is normally done in your .bashrc or similar.
What normally happens is people blindly copy and paste things into the terminal and don't read the documentation. This has been a problem on Linux since before Ubuntu was released. This isn't just limited to newbies either.
The state of GUIs BTW isn't great. Many of them look nice, and work reasonably well most of the time, *until they don't* e.g. If I double click a deb to install it, sometimes it will install. Other times it won't. So I don't even bother anymore and just use dpkg/apt. BTW it isn't any better with other distros. So I have to drop to the command line to fix the issue anyway.
So at some point you will need to learn bash, learn to read the man pages, and manually edit configuration files. It is unavoidable on Linux.
>About a month ago I heard Microsoft had their own Linux distribution to help Microsoft Windows users feel more at home. From memory, it was a rather simple GNOME setup. Nothing special.
You're confusing Microsoft's first-party Linux distro Azure Linux (née CBL-Mariner), which is intended as a regular MS-supported OS for containers, VMs, servers, etc., with the various Windows-like skins for Linux DEs that people have made for years.
> You think Microsoft maintains an entire secret distro just for Windows people to feel 'at home'.
Sorry, I don't understand the point you are making.
I did not suggest they had a "secret distro".
I am suggesting they could have claimed a dominant share of the Linux distro space by being the default WSL distro.
I would venture a guess that the name recognition helps them. No developer wants to install a distro they’ve never heard of, but they do want to install Ubuntu. If WSL supports Ubuntu then they can cash in on that.
>Microsoft cannot dominate the Linux kernel but it can gain control in userland. Imagine if they gained traction with their applications being installed by default in popular distributions.
Yes, but how do they make money by doing this?
Unlike the socialist hiveminds that end up being behind the distros. Microsoft has salaries and bills to pay.
As far as I've always seen, everyone loves to leech on Microsoft's free stuff but nobody wants to pay for a product.
I do not claim to be a business expert, but I don't think their success comes from just their Windows operating system. Well, I would say the success of Windows is not about the profit but the control of users. If the majority are on Windows, they are unlikely to change habits away from what they are familiar with.
Besides, new PCs/laptops come bundled with Windows; Microsoft has agreements with various retailers to ship Windows (Home edition) preinstalled. So in some ways, Windows is free for the user unless they pay for the Professional edition, or whatever is offered today.
Of course, the average user will create a microsoft account to complete the install. :-)
Besides the Windows OS -- it is really the services they provide: Azure, Office 365, SQL Server, Power BI, etc. I would say THIS is where a lot of the money comes from... businesses willing to pay for them!
I work for companies that are willing to PAY for these things - all for "support".
If something goes wrong.. raise it with Microsoft. Even if I know what the problem is, it is all about the ticketing system. Throw it to Microsoft and carry on.
Despite the above, Microsoft also has "free" software. They have started to open source many of their products, and to allow Linux support as well as Windows: Visual Studio Code, SQL Server, PowerShell, etc.
It comes back to my point. When they presented WSL, they could have provided an "MS Linux" distro, promoted as "ease for Windows users", and if it became a popular distro, it would have given Microsoft more control in userland... and drawn most Windows users away from Ubuntu, etc.
Like Windows, it is a way of keeping your userbase relying on what they already know.
I remember you could use it in a batch file to script some kinds of editing by piping the keypresses in from stdin. Sort of a replacement for a subset of sed or awk.
I haven't tried but this should be possible with vi too. Whether that is deeply cursed is another question.
This is just a "because I wanted to" project. And I get that; I've done a lot of those myself just to understand what the hell was going on. But the rewrite of Turbo Vision into FPC, compiling to half a dozen targets, has been around for 20 years. Turbo Vision is probably the best text-mode windowing library in existence. The cool fun kicks in when you can map a whole text screen to an array like so:
var
Screen: Array[1..80,1..25] Of Byte Absolute $B800; // or something like that as i recall
What Turbo Vision brought to the game was movable, (non-)modal windows. Basically a lot of rewriting that array in a loop. Pretty snappy. I made a shitload of money with that library.
You'll be surprised if I tell you several universities in India have not updated their curriculum in a very long time, and Turbo C++ (and its non-standard C++ flavor) is the weapon of choice. The school board in the '00s, which preferred to teach a programming language for CS, used to build its curriculum around this C++ dialect. I passed my high-school board examinations with this language. (It was already known to be outdated in 2004. The smart kids knew real C++ programming meant the Visual Studio 6 ecosystem. But one still had to deal with it to clear the exams.)
Admittedly, a few things have changed in the last couple of years. MATLAB is being replaced by Python. Teaching the 8085 and 8051 is being replaced by RasPi/Arduino. The 8086 is taught alongside ARM and RISC, and not touted as state of the art.
I last saw Turbo being used in 2016-17 in a university setting, inside DOSBox (because Windows 7+ has dropped support for such old programs). Insane, but true.
Yeah, I also learned C++ via Turbo C++ in school in India in the early 2000s. Googling for "conio.h" shows Indians still talking about it in blogs and C/C++ forums as of 2024.
Nice. This editor could see a lot of use in such places if it gains developer-oriented features such as LSP, DAP and tree-sitter parsers. As a Rust-written editor, it will probably be quite a bit easier on resources than the usual modern choices, which generally involve VSCode or JetBrains plus language-specific plugins.
Core memory unlocked... When I was ~10-12, I asked my dad (who knew nothing but thought he knew everything about computers) how to make programs for Windows because I couldn't in QBasic. His answer was "with C++!". He came home with a book Learn C++ In 24 Hours that had Turbo C++ on a single 3.5" floppy disk. Naturally, that did not work, but I still had fun failing to compile every program I attempted to write.
Oh, a few years ago I wanted to write a simple program for dos. Since this is a Linux-only household otherwise, I was delighted to see OpenWatcom has a Linux port. I spent a good half hour trying to get a simple first version of the program I wanted to write running, but it always crashed right away. I simplified more and more until I basically arrived at hello world. On a hunch I ran the windows version of OpenWatcom with wine, and lo and behold, the program ran flawlessly! Once I googled that I found a couple of forum threads where people went like "yeah sure the Linux port produces broken binaries" because of course.
It's never the compiler until it's the compiler. Just didn't expect it during some simple fun coding at home. :)
Arrays in TP were laid out in row-major order, and each character was represented by two bytes, one denoting the character itself and the other the attributes (foreground/background color and blinking). So, even better, array[1..25, 1..80] of packed record ch: char; attr: byte end absolute $B800:0000.
Replace $B800 with $B000 for monochrome text display (mode 7), e.g., on the Hercules.
My first company out of uni sold a TV advertising application written for DOS. It did all the reports, put together spot advert packages, measured reach and frequency, cost per point, etc. It used Nielsen ratings for data. The company at the time paid commissions along with salary to programmers. The app still lives on in Windows, but I've been out of that game for decades. Written in TP for DOS, then Delphi for Windows.
That was the toolchain that my company used. Turbo Vision was a Borland product, back when Philippe Kahn was running the company. We were that far ahead of the curve for "shrinkwrapped software development" at the time. That legacy, Delphi and FPC, still maintains the standard for desktop, native dev, really for the last 30 years.
I swear the characters appear on the screen before I press the keys when I'm in a real (not emulated) Linux terminal. I can't feel the lag as I'm typing this comment into a basic textarea, but it's clearly there, because a terminal feels magical.
Well, I don't have the rights to bundle anything with windows, nor would I want to. All you'd need is a thin player to reproduce a TUI screen if done in FPC, and it wouldn't be limited to Windows. All I'm suggesting is we tend to have some recency prejudice in our development, even when it costs more time/money than it should. I'm sure I've done the same over the years.
It's for people that want to use the Windows Terminal to edit files. The old `edit` command has been unsupported on Windows since 2006, so there was no Microsoft-provided editor that could be used in the command line since then.
Who's editing files big enough to benefit from 120GBps throughput in any meaningful way on the regular using an interactive editor rather than just pushing it through a script/tool/throwing it into ETL depending on the size and nature of the data?
I work exclusively outside of the windows ecosystem, so my usual go to would be to pipe this through some filtering tools in the CLI long before I crack them open with an editor.
"Better tools for the job" isn't always "the tool currently bringing in the $$$$$$$$$$$$$$$". So you live with it.
Sure, maybe by switching to Linux you can squeeze out an extra CPU core's worth of performance, after you fire your entire staff, replace them with Linux-experienced developers, and then rewrite everything to work on Linux.
Or, live with it and make money money money money.
To turn this around, you can have fun and ask if something is meaningful or not outside the fun at the same time. If it is, great. If it's not, no harm.
I'm not saying that doing this can't be fun, or even good to learn off of, but when it's touted as a feature or a spec, I do have to ask if it's a legitimate point.
If you build the world's widest bike, that's cool, and I'm happy you had fun doing it, but it's probably not the most useful optimization goal for a bike.
Not a great analogy. This editor is really fast. Speed is important, to a point. But having more of it isn't going to hurt anything. It is super fun to write fast code though.
Not on the regular, but there are definitely times I load positively gigantic files in emacs for various reasons. In those times, emacs asks me if I want to enable "literal" mode. Don't think I'd do it in EDIT, though.
As a specific benchmark, no. But that wasn't the point of linking to the PR. Although the command looks like a basic editor, it is surprisingly featureful.
Fuzzy search, regular expression find & replace.
I wonder how much work is going to continue going into the new command? Will it get syntax highlighting (someone has already forked it and added Python syntax highlighting: https://github.com/gurneesh9/scriptly) and language server support? :)
Right, these are more useful features, IMO, than the ability to rip through 125GB of data every second. I can live without that, but syntax highlighting's a critical feature, and for some languages LSP support is a really big nice-to-have. I think both of those are, in this day and age, really legitimate first-class/built-in features. So are fuzzy searching and PCRE find&replace.
Add on a well-built plugin API, and this will be nominally competitive with the likes of vim and emacs.
Yeah ... I don't think there's any overlap between "users largely unfamiliar with terminals" who want something easy to use, and 'Linux users who are sufficiently technical that they would even hear about this repo'.
Here's a scenario. You're running a cluster, and your users are biologists producing large datasets. They need to run some very specific command line software to assemble genomes. They need to edit SLURM scripts over SSH. This is all far outside their comfort zone. You need to point them at a text editor, which one do you choose?
I've met biologists who enjoy the challenge of vim, but they are rare. nano does the job, but it's fugly. micro is a bit better, and my current recommendation. They are not perfect experiences out of the box. If Microsoft can make that out of the box experience better, something they are very good at, then more power to them. If you don't like Microsoft, make something similar.
> You're running a cluster, and your users are biologists producing large datasets. They need to run some very specific command line software to assemble genomes. They need to edit SLURM scripts over SSH. This is all far outside their comfort zone. You need to point them at a text editor, which one do you choose?
Wrongly phrased scenario. If you are running this cluster for the biologists, you should build a front end for them to "edit SLURM scripts", or you may find yourself looking for a new job.
> A Bioinformatics Engineer develops software, algorithms, and databases to analyze biological data.
You're an engineer, so why don't you engineer a solution?
The title is a bit confusing depending how you read it. Edit isn't "for" Linux any more than PowerShell was made for Linux to displace bash, zsh, fish, and so on. Both are just also available with binaries "for" Linux.
The previous HN posts which linked to the blog post explaining the tool's background and reason for existing on Windows cover it all a lot better than a random title pointing to the repo.
PowerShell lends itself really well to writing cross-platform shell scripts that run the same everywhere you can boot up PowerShell 7+. Its origins in .NET scripting mean that some higher-level idioms were already common in PowerShell script writing even before cross-platform support existed; for instance, `$pathINeed = Join-Path $basePath ../sub-folder-name` handles path separators smartly rather than just trying to string-math it.
Its object-oriented approach is nice to work with and provides some nice tools that contrast well with the Unix "everything is text" tooling approach. Anything with JSON output, for instance, is really lovely to work with via `ConvertFrom-Json` as PowerShell objects. (Similar to what you can do with `jq`, but "shell native".) Similarly with `ConvertTo-Json` for anything that takes JSON input: you can build complex PowerShell object structures and then easily pass them as JSON. (I also sometimes use `ConvertTo-Json` for REPL debugging.)
It's also nice that shell script parameter/argument parsing is standardized in PowerShell. I think it makes it easier to start new scripts from scratch. There's a lot of bashisms you can copy and paste to start a bash script, but PowerShell gives you a lot of power out of the box including auto-shorthands and basic usage documentation "for free" with its built-in parameter binding support.
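To make those three points concrete, here's a minimal sketch in the same vein (the parameters, paths, and the lastRun field are made up for illustration, not from any real project):

    param(
        # Built-in parameter binding: mandatory checks, types and shorthand flags come for free.
        [Parameter(Mandatory = $true)]
        [string]$BasePath,

        [string]$SettingsFile = 'settings.json'
    )

    # Join-Path picks the right separator, so the script behaves the same on Windows and Linux.
    $settingsPath = Join-Path $BasePath $SettingsFile

    # JSON in -> objects, objects out -> JSON.
    $settings = Get-Content $settingsPath -Raw | ConvertFrom-Json
    $settings | Add-Member -NotePropertyName lastRun -NotePropertyValue (Get-Date).ToString('o') -Force
    $settings | ConvertTo-Json -Depth 5 | Set-Content $settingsPath

Saved as, say, Update-Settings.ps1, it runs the same way on any platform with PowerShell 7+, and omitting -BasePath gets you a usage error instead of a silent failure.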
I dunno, I spent a lot of years (in high school at least) using Linux but being pretty overwhelmed by using something like vim (and having nobody around to point me to nano).
EDIT.COM, on the other hand... nice and straightforward in my book
There's no shortage of less technical people using nano for editing on Linux servers. Something even more approachable than that would have a user base.
Especially noting it's a single binary that's just 222kb on x86_64— that's an excellent candidate to become an "installed by default" thing on base systems. Vim and emacs are both far too large for that, and even vim-tiny is 1.3MB, while being considerably more hostile to a non-technical user than even vim is.
I can definitely see msedit having a useful place.
My guess would be there are some people at MS who, somehow, still get to do something fun. Because they are not assigned to yet another project on how to make the OOBE even more miserable.
/rant Today I spent 3 (three) hours trying to set up a new MSI AIO with Windows Pro. Because even though it would be joined to the local AD DS and managed from there, I need to join some Internet-connected network, set up 3 stupid recovery questions which would make NIST blush, and wait another 30 minutes for a forced update download which I cannot skip. Oh, something went wrong - let's repeat the process 3 times.
Agree 100%. That's the biggest difference I feel whenever I'm doing remote PowerShell vs. SSH: it always feels like a struggle just to make basic file modifications.
I was hoping this would work over ssh in a macOS Terminal.app, but last I tried it was inserting all kinds of weird characters into the edited text files.
Windows ships an official OpenSSH server these days, but so far there haven't been any good official text editors that work over OpenSSH, as far as I know.
I've had to resort to "copy con output.txt" the few times I needed to put things into a text file over windows-opensshd...
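In a pinch, PowerShell over that same OpenSSH session can write a file without any editor at all; a rough sketch (the file name and contents are just placeholders):

    # Write a small file in one shot instead of fighting an interactive editor.
    Set-Content -Path .\settings.ini -Value @(
        'server=10.0.0.5'
        'port=8080'
    )

    # Append a line later without opening anything.
    Add-Content -Path .\settings.ini -Value 'timeout=30'

It's still a workaround rather than an editor, but it beats wrestling with copy con for anything longer than a line or two.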
It's nice to see an editor that explicitly isn't an IDE and is more something like Notepad.
Often when editing config files and such, it's more convenient to use something like Notepad than something like VSCode.
This editor doesn't have delusions of grandeur; it focuses on usability more than features, and it is better for it.
If you are an idea guy and want your idea made real, all you have to do is make a fancy fake trailer announcing it, then watch the company create it. The suits are so empty that anyone external can power-steer those ghost ships.
And DEBUG. It's sad we no longer have a debugger/(dis)assembler/binary editor bundled with Windows. That thing was tiny but you could do so much with it.
The copy command is called "copy", which kind of makes sense? I remember once seeing a colleague's .bashrc with things like "alias copy=cp". Flags won't work the same way, of course.
Sure, "chcp" is a mouthful, but "del" or "erase" makes as much sense as learning that "rm" is short for remove. You pick up either convention quickly enough, except that I'm constantly using "where" when I meant "which". Maybe I should make an alias or something.
Don't get me started on PowerShell's look-we-can-use-proper-words-let's-see-how-long-we-can-make-this naming.
But there's no reason anyone can't use generic naming for their products. Many software applications do, and quite frankly it's more descriptive for attracting new users than coming up with made-up names.
I would argue the only reason made-up names exist is to keep marketing departments employed, needlessly explaining to users what the products are.
It's pretty easy to build there; I've tried this on macOS and Linux.
The one thing that vexed me, for something based on edit, was Ctrl+P being hijacked for something that isn't print; it's like we forgot about CUA over the last 15 years.
It will take more than nostalgia and Rust to tear me away from my Neovim setup that has been built up and improved over the years: LSP, DAP, autocompletion, aliases and bindings for each programming language. Lazily loaded, of course, so it's still snappy.
I manage configuration and external dependencies such as LSPs with Nix.
Then I have separate Nix shells for each project to load tooling and other dependencies in an isolated, repeatable session. Add in direnv for a more seamless development experience.
You are not the target audience. This is aimed at casual users and beginners, and it's already in good shape to replace nano with its user-friendly, mouse-enabled TUI.
I don't understand why they want to go with DLLs for scripting instead of WASM + wamr which is really small. Maybe I'm just really inexperienced in this space.
If the editor focused on becoming an IDE instead of being idiot-proof and cleanly designed, what would distinguish it from the 1,000 other editor projects?
Runs on Windows too! It has "Redo", not to be confused with "Undo Undo". Unfixed-width Tabs are a huge leap forward. LF, sans CR, will cut your file sizes in half.
Meanwhile, they forced AI Copilot bloat into Notepad, whose singular use-case was supposed to be that it does one thing well without unnecessary features.
Unfortunately, the new Edit isn't safe from such decisions.
While Satya might have made the change to "Microsoft <3 FOSS", the Gates/Ballmer era was much better towards Windows developers.
Now we have a schizophrenia of Web and Desktop frameworks that they themselves hardly use; what used to be a comfortable VS wizard or plugin is now, e.g., a CLI tool that dumps an Excel file, showing that the newer blood, and their upper management, have hardly any Windows development culture.
I don't know how many people don't know this, but now you actually can't release an app on Windows without it showing a warning during installation unless you sign it with an EV certificate, which costs upwards of $500 a year.
As you may have guessed, this simply pushes out smaller devs. This used to NOT be like this. It should NOT be like this.
Unfortunately Apple normalised it, first with the iPhone. There are upsides (theoretically - fewer trash apps), but the review/curation process doesn't scale, and yep - the small devs are effectively told to bug off.
10 years ago I wanted to build a Love2D game, and release it for the three major OS's. The .love files are effectively ZIP archives, kinda like cartridges, but you need the correct Love2D version (they broke API compat every year or so). Windows and Mac used to be: "cat love.exe game.zip > game.exe".
Linux gave me the most crap, because making a portable, semi-static build was a nightmare; you couldn't rely on distros because each one shipped a different version of love.
Now Linux is actually becoming more viable, not because it's making that much progress, but because the two mainstream platforms are taking steps back.
Good. This might suck for open-source devs, but for normies who might get a random exe link this is good. I've gotten numerous phone calls from relatives when they try to run some unrecognized app; most of the time it is benign, but on a few occasions it was something malicious.
It's a heavy tax to protect the ignorant. I hear things like this and think how I've been using a computer for nearly 4 decades and it's never once happened to me. Maybe those types of people need to re-evaluate their technology choices (maybe iPad is more appropriate) instead of taxing the entire ecosystem to protect them from themselves.
Low-income countries don't have the money for iPads. My parents run on a 300-euro computer bought 5 years ago. My dad is technical enough to get around a computer, but he's in his 60s now. My mom can open Facebook and YouTube. Sometimes either of them downloads stuff and opens it. So your solution is "make millions of consumers spend $$ on overpriced hardware and an even more closed-off system, so a few hundred open-source devs don't have to spend $500 to verify their app (which they would have to do to release on iOS anyway)"?
Ain't no way.
If you count the number of ignorant people who use Windows versus the people like you, you'll quickly realize the tax is very cheap for the level of protection it offers to the number of people it offers it to.
I'm so glad to hear that from someone unprompted. I tried WPF and it was a million times harder to use than WinForms, and I couldn't even be bothered to try out MAUI (although I accept it as an apology for WPF lol). I'm still using a WinForms application every day (Git Extensions) and have been able to contribute to it not least because it's the good old familiar WinForms.
This is not to say that WinForms is without its problems. I often wonder what it could be like if all the effort of making WPF and MAUI had gone into maintaining, modernizing and improving it.
I think that the native GUI development APIs provided by OS vendors need a kind of "headless" implementation first, where you can build UI in pure code like winforms, and then they should offer a framework on top of that. I, personally, hate XAML. It's stricter than HTML/CSS and very opinionated about how to organize your application. I feel that XAML frameworks should have a common Winforms-like API behind them that you can switch to any time you want. But I've found that using the C# code-behind APIs manually for WPF, UWP, MAUI, etc, is far more verbose than Winforms was.
My only major problem with WinForms is that it's still using GDI under the hood, which, despite what many people believe, is actually still primarily software-rendered. If they could just swap GDI out for Direct2D under the hood (or at least allow a client hint at startup to say "prefer Direct2D"), it would really bring new life to WinForms, I think.
I would also like a C++ native GUI API that's more modern than MFC
"C# Markup" [1] [2] sounds a lot like what you are looking for. As the only "second party" option in this space it's interesting that it is so MAUI only/MAUI focused, but I suppose that's the "new hotness".
There have been similar F# libraries and third-party C# libraries for a while that seem nice to work with in similar ways.
Unfortunately that is something Microsoft seems incapable of.
MFC was already relatively bad versus OWL. Borland[0] kept improving it with VCL and nowadays FireMonkey.
Then there is Qt as well.
Microsoft instead came up with ATL, and when they finally had something that could rival C++ Builder, with C++/CX, a small group managed to replace it with C++/WinRT because they didn't like extensions, the irony.
With complete lack of respect for paying customers, as C++/WinRT never ever had the same Visual Studio tooling experience as C++/CX.
Nowadays it is in maintenance, stuck in C++17, working just good enough for WinUI 3.0 and WinAppSDK implementation work, and the riot group is having fun with Rust's Windows bindings.
So don't expect anything good coming from Microsoft in regards to modern C++ GUI frameworks.
[0] - Yes nowadays others are at the steering wheel.
Borland was pretty good on the GUI front, I think we're forgetting how easy it was to get something rolling in Delphi. It's baffling Microsoft still hasn't gotten their stuff together on this. They've been just releasing new frameworks since the WinRT era and hoping something sticks.
Firstly, that nobody believes them when they swear that {new GUI framework} will be the future and used for everything. Really. Because this time is not like those other times.
Secondly, pre-release user feedback. Ironic, given other parts of Microsoft do feedback well.
Imho, the only way MS is going to truly displace WinForms at this point is to launch a 5-year project, developed in the open, and guided in part by their community instead of internally.
And toss a sweetener in, like free app signing or something.
Agreed it is the easiest; however, it is also possible to use WPF in the same style as Forms, with more features, no need to go crazy with MVVM, just stay with plain code-behind.
Having said this, from 3rd parties, Avalonia is probably the best option.
While I think Uno is great as well, they lose a bit by betting on WinUI as foundation on Windows, and that has been only disappointment after disappointment since Project Reunion.
We spent the better part of a calendar year researching what framework to update our MFC app to. We really liked the idea of staying first-party since our UI is explicitly Windows-only, and we looked at every framework - MAUI, winforms or WPF with a C# layer, WinUI3...
It quickly became apparent that WinUI3 was the only one even close to viable for our use case, and we tried getting a basic prototype running with our legacy backend code. We kept running into dealbreakers we hoped would be addressed in the alleged future releases, like the lack of tables, or the baffling lack of a GUI designer (which every other previous Win framework had).
The new Edit.exe is indeed safe from those things.
A requirement for the tool is that it must remain as small as possible, so that it can be included in the smallest distributions of Windows, like Nano Server. It is the rescue text editor there.
I’m sure plugins are going to do all the things that everyone doesn’t want (or does want) but the default edit.exe will remain small, I’d bet money on it.
I took a screenshot and pasted it into the new Win11 Paint. Even minimized, Paint was constantly using 5% CPU and sitting at ~250MB of RAM. I guess I can begrudgingly get over the RAM, but squandering the CPU like that is ridiculous.
What happened to pride or quality control or anything?
I had to open Notepad and see it for myself. Wow! I see the Icon.
I remember Copilot just suddenly appearing in my taskbar and finding it annoying. Despite removing it, I still see it lurking around... and now I see it in a SIMPLE TEXT EDITING PROGRAM named Notepad.
Every product has bizarre bloat. I understand things might get heavier over time with new features, but Office from like 20 years ago still works pretty great. In fact, I don't really see any new features that I'm missing in my normal use case. Actually, anything that DOES exist in a newer version is something I actively DO NOT want. For example, monthly/yearly subscriptions, popups that interrupt typing to advertise some new bloat, and dedicated buttons to import any file into a PowerPoint presentation or email.
Look at Outlook. Literally less than 25% of the screen appears to be dedicated to email content. I say literally because I physically measured it and from what I remember it was 18% to 20%. Microsoft keeps adding these gigantic toolbars that each have duplicate buttons that often can’t really be adjusted, removed, or hidden. Or it may be an all-or-nothing scenario where something can be removed but then you can’t e.g. send emails.
Rather than fixing the problem, the solution is to add a new toolbar. This frequently keeps happening. Just one more toolbar with a select subset of buttons in one place so people can find it. Well now… We have some extra whitespace… Let’s throw in the weather there and why not put the news in too. What could possibly go wrong?
And then loading the news, some totally unrelated and non-critical feature they shove in forcefully by default frequently has at least one critical severe bug where there’s an async fetch process that spikes the cpu to max and crashes the whole system. There’s no way to disable news without first loading outlook and going into advanced settings, which of course is past the critical point of the news being loaded.
Go look at like Outlook 2003. It is nearly perfect. It’s clean, simple, and there’s no distractions. This is so amazing, like many Microsoft products that seem to be built by engineers, but I don’t know how we get to modern outlook that feels like it has 10 to 50 separate project manager teams bloating it up often with duplicate functionality.
This would be bad enough, but then, instead of fixing it like I said before, or fixing it by reducing or consolidating teams or product work, we get ANOTHER layer of Microsoft bloat by having multiple versions of the same product. So we have Outlook (legacy), named that way to make you feel bad for using an old version, or to scare you into believing it won't be supported. Then there's Outlook (New). Then there's Outlook (Classic), which isn't legacy or new but is a weird mix of things. Then there's a web version that they try to force everybody into because it's literally perfect and there's no reason not to use it... Somehow they didn't catch that emails don't load in folders unless you click into them, or that sorting rules don't work the same or don't support all the same conditions. Rather than fixing it, you get attacked for using edge-case, frivolous, advanced, obscure functionality. Like who would want emails pre-sorted into any folder except the inbox? Shame on you for using email wrong, I guess.
I’ll skip over the part where there’s multiple versions of the multiple forks of outlook. But there’s also Government, Education, Student, Trial, Free, Standard, Pro, Business, Business pro, Business premium, etc.
The last infuriating point in my rant has to come down to their naming standards. For some reason they keep renaming something old to a completely new name and of all the names they could pick, it’s not only something that already exists but it’s another Microsoft product. This is a nightmare trying to explain to somebody who is only familiar or aware of either the old or the new name and this confusion is often mixed even on a technically capable and competent team. For bonus points, the name has to be something generic. Even like “Windows” which is not a great example because the operating system is so popular but you can imagine similarly named things causing search confusion. Or even imagine trying to search for the GUI box thing that displays files in a folder within the operating system, also called a window, and try to imagine debugging an obscure technical problem about that while getting relevant information.
There are so many Microsoft moments that things like adding AI to Notepad hardly faze me anymore. I don't like that they do that, but I wouldn't necessarily be so offended if their own description, the one they came up with in the first place, was what you mentioned. Constantly going against their own information, which they invented themselves and chose to state as a core statement, just irritates me.
> The last infuriating point in my rant has to come down to their naming standards. For some reason they keep renaming something old to a completely new name and of all the names they could pick, it’s not only something that already exists but it’s another Microsoft product.
Microsoft has seemingly sucked at naming things since at least the mid-90s. It's effectively un-search-engine-able, but I recall that in the anti-trust action in the mid-90s a Microsoft person was trying to answer questions about "Internet Explorer" versus "Explorer" (as-in "Windows Explorer", as in the shell UI) and it was a confusing jumble. Their answers kept coming back to calling things "an explorer". It made very little sense. Years later, and after much exposure to Microsoft products, it occurred to me that "explorer" was an early 90s Microsoft-ism for "thing that lets you browse thru collections of stuff" (much like "wizards" being step-by-step guided processes to operate a program).
It'd be nice if they didn't recommend winget for installation though. winget is an egregious security risk that Microsoft has just pretended follows even minimal security practices, despite launching only four years ago with no protection from bad actors whatsoever and never implementing any improvements since.
disclaimer: I used to commit to winget a lot and now I don’t.
…but is it really less secure than brew or choco? The installers are coming from reasonably trusted sources and are scanned for malware by MS, a community contributor has to approve the manifest changes, and the manifests themselves can’t contain arbitrary code outside of the linked executable. Feels about as good as you can get without requiring the ISVs themselves to maintain repos.
The installers are coming from random people on the Internet. Most software repositories have trusted contributors and a policy of requiring a piece of software be arguably worthy of inclusion. Perhaps because Microsoft is afraid to pick winners, every piece of garbage is allowed on winget, and there's no way to restrict who can make changes to what packages.
There are ISVs that would like to lock down their software so they can maintain it but a trillion dollar company couldn't spare a dollar to figure out a "business process" to do this. As far as I know, Microsoft has a single employee involved who has laughed off any security concerns with "well the automated malware scanner would find it".
The "community contributors" were just... people active on GitHub when they launched it. Was anyone vetted in any way? No.
The Microsoft Store has actual app reviewers, winget has... "eh, lgtm".
The policy of including the author's name next to the project name, along with some indication that it really is the author and not an imposter, I think that's probably the best we're ever going to get, since at that point it just comes down to community trust.
Except curl | bash definitely executes code from whoever controls the URL you put in, and if the URL is HTTPS, it does so in a reasonably secure fashion.
When you winget something, there is no validation that the executable is from the official source, or that a third-party contributor didn't tamper with how it's maintained.
There is 0 validation that the script that you are piping into bash is the script that you expect. Even just validating the command by copying and pasting the URL in a browser -- or using curl and piping into more/less is not enough to protect you.
> you both seem to be saying that you get whatever the server sends
Yes, but I am also saying that you can't verify that the script that is run on one machine with a pipe is the same script that runs on a second machine with a pipe.
The key part of the original statement is the server can choose to send different scripts based on different factors. A curl&bash script on machine 1 does not necessarily mean the same curl&bash script will be run on machine 2.
The tooling provided by a `curl | bash` pipeline provides no security at all.
With winget, there is at least tooling to be able to see that the same file (with the same hash) will be downloaded and installed.
There are ways to do this better. For example, check out https://hashbang.sh. It includes a GPG signature that is verified against the install script before the script is passed to the shell.
The parent is talking about MITM, which is prevented with TLS and curl but not winget. They are saying curl is strictly better, not that it is impenetrable. If you trust the domain owner, you can trust curl | bash, but you can't trust winget
It's easy enough to view the manifests (eg, https://github.com/microsoft/winget-pkgs/blob/2ecf2187ea0bf1...) and, arguably, that is better than the MITM protection you would get using naked cURL & Bash, simply because there are file hashes for all of the installer files, provided by a third party.
> They are saying curl is strictly better, not that it is impenetrable
Right. But it arguably is not strictly better.
> You can't trust winget
Again, this is not backed up by anything. I have trust in winget. I can trust that the manifest has at least been vetted by a human, and that the application that will be installed should be the one that I requested. I can not trust that this will happen with curl | bash. If the application that is installed is not the one that I requested, there is tooling and a process to sort out why that did not happen, and a way to flag it so that it doesn't happen to other users. I don't have this with curl | bash.
If you think HTTPS is performing code validation I have news for you.
HTTPS only guarantees that the packets containing the unverified malicious code are not tampered with on the way from the server to you, and that server could very well be compromised, with alternate code put in its place.
You are drawing an egregious apples-to-oranges comparison here. Please re-read what you said.
You could serve digitally signed code over plain HTTP and it would be more secure than your example over HTTPS. Unfortunately there are a lot of HTTPS old wives' tales that many misinformed developers believe in.
curl | bash is absolutely on my very short list of “things I’ll never do” and I wince when I see it. rm -rf starting from / is another. I watched someone type in (as root) “rm -rf / home/user/folder” once. By the time I realized what had happened it was too late.
So much excitement that this got posted 3 times in a week
1. By the author - https://news.ycombinator.com/item?id=44034961
2. Ubuntu publication - https://news.ycombinator.com/item?id=44306892
And this post.
There is also this: https://news.ycombinator.com/item?id=44341462
The original edit.com, from around dos 6.22 (and later 7.0, ie. win95) was my first IDE. Well, I started with qbasic, so I was fairly familiar with it as it was similar (or same?), but when I started learning C/C++ with djgpp, I just continued using edit.com.
My "project file" was `e.bat` with `edit file1.cpp file2.cpp file3.cpp`, as it was one of the few editors that I knew that had a decent multi file support with easy switching (alt-1,2,3 ..). I still continue remapping editor keybindings to switch to files with alt/cmd-1,2,3,.. and try to have my "active set" as few of the first files in the editor
It wasn't a great code editor, as it didn't have syntax highlighting, and the indent behaviour wasn't super great (which is why in my early career had my indent was two spaces, as that was easy enough to do by hand, and wasn't too much like tab). But I felt very immediate with the code anyway.
I knew that many others used editors like `qedit`, but somehow they never clicked with me. The unixy editors didn't feel right in dos either.
Quickly trying this, it doesn't seem to switch buffers with the same keybindings, even if it does seem to support multiple buffers.
You should raise that as an issue. If things like that get in early enough, they get heard.
And it wasn't just similar. It was literally the same. EDIT.COM simply started QBASIC up with a special flag. One could just run QBASIC with the flag. As I said at https://news.ycombinator.com/item?id=44037509 , I actually did, just for kicks.
It may not have had syntax highlighting, but it did have syntax capitalization (for lack of a better term?). If you typed a line in all lowercase, after hitting enter it would automatically uppercase the reserved words. It wasn't much, but it helped
edit was a godsend after the `copy con` days
I remember using edlin a lot in my early computing days. It was murder to learn but once you knew how to wield it, it was excellent. I don’t know why I was forced to learn that but I needed it for something and stuck to it the entire time I used DOS for anything. And people were in awe when you used it while they watched. “What the hell was that!?”
I used to recommend micro[1] to people like those in the target audience of this editor. I wonder if that should change or not.
--
1: https://micro-editor.github.io/
IMO it should not.
`edit` doesn't even support syntax highlighting (at least, not out of the box when I tried it).
It might in the future, though, as the main developer has opened an issue about it: https://github.com/microsoft/edit/issues/18
The trick is doing it while keeping the binary size small, so tree sitter is not an option.
Vim (and so many other editors, too) supported syntax highlighting for decades before TreeSitter even existed. Let’s not act as if this is a novel challenge.
I think you missed the point of edit.
I think not. Edit is to edit files in the terminal. What kind of files do you expect people to edit in the terminal? Most certainly files that would benefit from colors, not prose.
I met someone recently that still uses WordStar. Yes I'm serious. He runs it in QEMU on FreeDOS. He's a writer for a living.
George R.R. Martin still uses Wordstar as well!
Sure, there are always exceptions but they're only that, exceptions.
A lot of authors, myself included, want a "distraction free" editor. It's a whole over-populated market segment.
Prose thrives in the terminal. Ice and Fire was written in WordStar, as just one popular example.
Rachel Kroll once posted that she writes her posts in nano, with the distraction of syntax highlighting turned off: https://rachelbythebay.com/w/2011/09/24/editor/
I can't find the link but I think at some point she compiled her own nano with some "helpful" feature patched out again.
Is that why the series ground to a halt?
> “I’m 12 years late on this damn novel, and I’m struggling with it,” he said. “I have like 1,100 pages written, but I still have hundreds more pages to go. It’s a big mother of a book for whatever reason. Maybe I should’ve started writing smaller books when I began this, but it’s tough.”
He's averaging a hundred pages a year. Maybe not the fastest, but certainly not the slowest writer. With the size of his books... Cut the guy some slack.
I think you missed the question I was answering in my comment.
There's an underlying assumption about "target audience for this editor" that you both share, that others, I suspect quite a few others, do not.
For starters, there's your assumption that there is "syntax" to be highlighted. Not every text file is something written in a computer programming language.
You're right, I do assume most (90+%? of) people that are looking for a terminal editor are likely developers.
In fact I'd put money on it, but sadly do not have any evidence to back it up.
If you have evidence to the contrary I'd be intrigued!
Micro is a great editor to replace stuff like nano. I think it would be a bad replacement for edit though, edit is very barebones, and micro is very "upgradeable" through lua. It also handles large files quite well also
Last time I checked, micro should have been called macro based on the binary file size.
Isn't the relatively large binary just because it's written in Golang? Go executables each ship their own copy of the Go runtime. That alone accounts for a big chunk of small programs like this.
Nano also links against ncurses, which is about as big as the compressed tarball for micro. I'm looking at the dependency closures of each right now in nix-tree[1], and micro's closure's total size is 15.04 MiB while nano's is 12.78 MiB-- not really "orders of magnitude" (as a sibling commenter suggests) when you look at it like that.
Admittedly, nano's dependencies (`file` and `ncurses`, on my system) are likely to ship as part of the "base system" of any Linux distro anyway; the real size it adds to any distro is negligible. But there's no indication to me that micro is meaningfully "bloated", as the meme goes; it seems like what is required to run it is reasonable and comparable to other tools that serve the same purpose.
--
1: See: https://github.com/utdemir/nix-tree ; Try `nix run nixpkgs#nix-tree -- $(nix build --no-link --json nixpkgs#nano | jq -r .[0].outputs.out)`
The reason may help explain why, but it is not really interesting from the point of view of the user or of those packaging it with their OS.
Well, it is orders of magnitude larger than nano, so...
Seriously? We're going to complain about a couple megs in a text editor in the year 2025?
Like my mother says: small streams make huge rivers.
Not caring about a couple of megs here and there is what makes some modern systems so bloated.
Yes, I couldn’t use it on my router because of its size. There's no reason for a TUI to be this big. The advanced features beyond syntax highlighting aren't useful to me. There should be a light version.
I installed nano with CUA keybindings instead.
There is also dte[1]. It hits exactly the same niche and offers an extremely lean editor with Unicode support, CUA key bindings and much more. It has replaced nano as my terminal editor.
[1]: https://craigbarnes.gitlab.io/dte/
I'd recommend everyone to take a look and read some of dte's source code. It's a great example of beautifully written modern C code.
Why are you opposed to learning vi which is already installed everywhere?
As someone who uses CLI text editors frequently, but not often enough to build the muscle memory for vi shortcuts, I really appreciate simple text editors.
I know that I can press some 3-4 arbitrary buttons to mark a block and move it to a different place; how about I just mark it with my cursor and Ctrl-X, Ctrl-V, like every other freaking program out there?
I appreciate that I get vi on freshly installed or secured servers, but for things I use daily, I just want KISS. I'm already counting on people answering 'but vim is easy and simple'; opinions differ, I guess.
https://www.youtube.com/watch?v=9n1dtmzqnCU
I like vim a lot, and I use vim-style bindings wherever I can.
But before I learned to ride a bike, I used training wheels, and before I learned enough vim to enjoy using vim, I leaned on nano.
When someone is first learning to explore GNU/Linux, or even to dig into the Unix guts of macOS, they're learning a whole new world, not just a new text editor. For some people, strategic bridges to what they know (like CUA or Windows-like shortcuts) can make this process more fun and less fatiguing. Sometimes that difference is decisive in keeping someone motivated to learn and explore more.
Anyway, I think vim is worth learning (and maybe some of the quirks of old-school vi, if you expect to work on old or strange systems). It's not a matter of if I recommend that someone learn vim, but when. And until it's time for them to explore an editor deeply, micro seems like a great fit for most people.
I also want to say: as enthusiasts of Unix-like operating systems, or as professionals who appreciate some of their enduring strengths, should we really embrace a "because it's there" doctrine? Isn't that same kind of thinking responsible for huge, frustrating piles of mediocrity that we work with every day and resent?
As someone who loves an ecosystem built first by volunteers as "just a hobby, nothing big and serious", I will say it's sad, if not hypocritical, to dismiss software projects just because they aren't already dominant players. Most software I love was once marginal, something its users went to lengths to install on the systems they used because they enjoyed it more than the defaults. We should, to the extent practical, try to leave a little room for that in the way we approach computing, even as we get older and grumpier.
Because vi has all the usability of a keyboard made out of hedgehogs.
I wouldn't consider vi's usability to be bad overall. Sure, affordance ("is it easy to grasp which moves I can make without much cognitive effort?") is terrible.
Setting up a decent environment is also a huge pain to get started with, but nowadays you can just hop into a prewarmed pool with premade setups like Normalvim or LunarVim.
But usability is not just "is it easy to learn"; it's also "once I know it, how hard is it to use?"
Once the moves are ingrained in your (muscle) memory it becomes incredibly efficient. di{, dat, yaf etc. are just the low-hanging fruit; once you start with regexes, macros and plugins, the fun really begins.
vi isn’t usable. It sucks. But the facts are that it’s installed everywhere and you can learn how to use it in 10-15 minutes. It's easier to patch your ignorance of basic vi than it is to install software on every machine you’ll ever edit on.
I learned vi a long time ago and use it when no other editor is at hand. In fact, I am using several editors simultaneously, depending on the task at hand and what is available. I stumbled over dte because I like to try out new things. And because dte hits many sweet spots for me, I installed it on machines where I often need a terminal editor. Binding myself to only one tool just because I learned to use it at some point in time is not my philosophy. Thankfully, the open source world offers so many alternatives and innovations that there is something for almost all tastes and habits. It costs nothing besides building the muscle memory to switch as needed and wanted.
You realize that you're asking this in a discussion of a tool that is intended to be installed out of the box on Microsoft Windows, where vi is not installed out of the box, right? Your "everywhere" doesn't include the primary use case for what is being headlined here.
Learning CUA once is more realistic/convenient for most people.
And you can get it on Windows with just "winget install zyedidia.micro". Reminds me of 8 & 16-bit editors of a similar era.
Genuinely curious: how do projects like these get approved in an org at the scale of Microsoft? Is this a side project by some devs or part of some product roadmap? How did they convince the leadership to spend time on this?
A text editor is an obvious target for copilot integration.
As they explained, they needed a text editor that works in a command line (for Windows Core server installs), works across SSH (because for a while now Windows included an SSH Server so you can completely manage it through SSH), and can be used by non-vi-experienced Windows administrators (i.e. a modeless editor).
Telling people to use nano would of course have been next to impossible. Much easier to rewrite a DOS-era editor in Rust, naturally.
This way gets coolness points, HN headlines, makes the programmers who wrote it happy, and probably is a contribution to making a couple of autistic people feel included.
Rust + EDIT.COM is kind of like remaking/remastering an old video game.
micro would have been an even better choice, the UX is impressively close to something like Sublime Text for a TUI, and very comfortable for those not used to modal editors.
This is the first time I've heard of micro. More info here: https://micro-editor.github.io/
It doesn’t have a menu for windows devs, and is supposed to be small and light. Two strikes against.
I like micro and use it occasionally. I like this even more. I booted up the editor and instantly thought “it would be nice if there was a clickable buffer list right about…” and then realized my mouse was hovering over it. My next instant thought was that micro should have implemented this feature a long time ago
The developer actually explained, on Hacker News just over a month ago, some of the engineering choices that ruled out nano.
* https://news.ycombinator.com/item?id=44034961
> rewrite
This is not a rewrite. Maybe it’s slightly inspired by the old thing, especially with having GUI-style clickable menus (something not seen often in terminal editors), but it’s much more modern.
It does seem "modern" in the sense that it is incredibly limited in functionality (EDIT.COM from DOS is much more full-featured) and deviates from well-established UI conventions.
CUA-style menubars aren't that uncommon in textmode editors. Midnight Commander's editor has traditional menubars with much more extensive functionality, as does jedsoft.org's Jed editor. Both of these also support mouse input on the TTY console via GPM, not just within a graphical terminal.
I still see it as rewrite even if you only use the original as inspiration. But that's just semantics
If they hadn’t called it “edit” you wouldn’t have thought of it as a rewrite.
It's not semantics. It's just a lie.
nano's great but the shortcuts are a bit oddball, from the perspective of a Windows guy.
does nano support mouse usage? It doesn't seem to work for me (but maybe it just needs to be enabled somewhere)
I guess they thought that inheriting 25 years of C code was more trouble than designing a new editor from scratch. But you'd have to ask the devs why they decided to go down that route
> does nano support mouse usage?
Yes, but you have to put `set mouse` into your nanorc.
Each group needs to do something and they come up with the ideas. Sometimes it is driven by various leaders, e.g. “use copilot”. Sometimes it is an idea from some hackerdayz event which gets expanded. Sometimes this is driven in research units where you have a bunch of technical people twiddling their thumbs. Sometimes this is an idea that goes through deep analysis and multiple semesters before it gets funding.
Look at the amount of contributors here. This project was probably some strategic investment. It did not come to existence overnight.
So many things I like about this!
First of all, an empty list of dependencies! I am sold! It works great. I can't believe they did a whole TUI just for this, with dialogs and a file browser. I want to use it for a project of mine; I wonder how easy that would be. If someone involved in the project is here: why not use Ratatui?
Code quality is top notch, can only say one thing:
Bravo!
Literally no deps except for a few dev-deps that make testing easier. That's reasonable for something you ship as a fundamental tool to be used by administrators as part of an OS like Windows. Take a look at lhecker's [1] responses for more info on the not-invented-here stuff.
[1]: https://news.ycombinator.com/threads?id=lhecker
About a month ago I heard Microsoft had their own Linux distribution to help Microsoft Windows users feel more at home. From memory, it was a rather simple GNOME setup. Nothing special.
I am surprised Microsoft didn't use the opportunity to create a Microsoft-specific Linux distro that replaces bash with PowerShell, offers Edit alongside vim, nano and other choices, and includes .NET and Visual Studio Code for developer installs.
Microsoft could have used this as their default WSL install.
It may not have won the war against typical distros like Ubuntu or Debian, but it could have gained a percentage and become a common choice for Windows users, and there are a lot of Windows users!
Microsoft cannot dominate the Linux kernel but it can gain control in userland. Imagine if they gained traction with their applications being installed by default in popular distributions.
This Microsoft Edit is available for Linux, as PowerShell and others are. If they had played their cards right, perhaps 10 years ago, their distribution could have been in the top 5 today, all because many Windows users would use it as their WSL distro.
Giant companies (like M$) can inject their fingerprints into my personal space. Now we just need Microsoft Edit to have Copilot on by default...
I strongly suspect in time Microsoft will move to Linux, at least with things like Windows Server and embedded Windows. Then a gradual change for Windows desktop, or a sort of Windows Legacy vs Windows "Linux Workstation" desktop options. Linux kernel + some sort of 'super' WINE and a fallback tightly integrated Windows classic on a VM for certain programs.
Only problem is that the NT kernel in many ways is much better than the Linux kernel design wise (for example, the NT kernel can handle a total GPU driver crash and restore itself, which I think Linux would really struggle with - same with a lot of other drivers).
But Windows is increasingly a liability not an asset for Microsoft, especially in the server space. Their main revenue stream is Azure & Office 365 which is growing at double digits still, with Windows license growth flat.
At a minimum I'd expect a Linux based version of Windows Server and some sort of Workstation version of Windows, based on Linux.
> I strongly suspect in time Microsoft will move to Linux, at least with things like Windows Server and embedded Windows.
You may not understand how important Microsoft considers backwards compatibility. Switching to a Linux kernel would eliminate all of that, and that is simply not an option for Microsoft.
The Linux kernel is missing a lot of esoteric things that the NT kernel has and that people use a lot, as well.
Windows as we use the word today (any variant) will not ever switch to a Linux kernel.
I do hope one day that Microsoft put a proper GUI on Linux though, no X, no Wayland, but something smarter and better than those. Probably also not likely to happen but I’d love to see it if they could do it well.
I think most userspace applications won't interact directly with the NT kernel, hence a project like Wine is at all viable (and sometimes provides better compatibility with older Windows applications than Windows).
> I do hope one day that Microsoft put a proper GUI on Linux though, no X, no Wayland, but something smarter and better than those.
https://xkcd.com/927/
Now that I say that, though, that does sound like a Microsoft kind of move. They do love other platforms "to death."
The reason why WSL is a thing is because developers in corps needed a way to run Linux. IT support and techs typically don't know anything about Linux and don't want to deal with supporting it. WSL fixes this problem.
Most developers don't want to use Linux at all. Many developers don't even really know how to use a terminal and rely on GUI tools.
> Most developers don't want to use Linux at all. Many developers don't even really know how to use a terminal and rely on GUI tools.
First of all, I disagree with this comment.
However, let's assume you are right: that the average "Windows developer" has little to zero skill in GNU/Linux.
If that is the case, it proves my point EVEN MORE that Microsoft missed out on creating a Microsoft Linux distro... designed to ship PowerShell, Visual Studio Code, Edit, and potentially Edge, SQL Server, etc.
It would still be Linux, but keep to what they know from Windows, and it would have given Microsoft more power in the Linux world.
> First of all, I disagree with this comment.
You can disagree all you want. It is simply the truth. I've contracted in the UK and Europe. Most devs don't even know you can tab-complete most commands in modern shells (IIRC cmd.exe supports this). This is true of both Microsoft shops and shops that use open-source stacks, e.g. LAMP and similar.
I was in a large company in the NW and I knew two developers in a team of 30 that knew basic bash and vim.
There is a reason why "how do I exit vim" is a meme. Most people have no idea how to do it.
> If that is the case, it proves my point EVEN MORE that Microsoft missed out on creating a Microsoft Linux distro... designed to ship PowerShell, Visual Studio Code, Edit, and potentially Edge, SQL Server, etc.
Respectfully you seem to have never worked with the people I describe. You listed PowerShell as if they would use it. A former colleague of mine was quizzed why he would use PowerShell to write a script that would run on a Windows Server. They had expected him to write a C# program.
> I was in a large company in the NW and I knew two developers in a team of 30 that knew basic bash and vim.
I have worked for various companies as well, in the UK, the Netherlands, etc. Yes, in my experience, developers working in a Windows environment (Windows development) will have less knowledge of bash or Linux in general if they simply are not using it. These are developers using Windows, SQL Server, .NET, and other Microsoft-focused products.
I would agree that Windows developers have fewer shell skills, even with CMD, much less PowerShell. However, if we are going to FOCUS on this userbase, they are more likely to accept a WSL Linux distro created by Microsoft and bundled with PowerShell, .NET, etc. than to use Ubuntu with bash, vim/nano or variants.
Also, I have worked for companies that focused on LAMP development, and their Linux skills ranged from decent to pro. The only time someone would struggle is likely because they are junior level and coming from a Windows background.
> Respectfully you seem to have never worked with the people I describe. You listed PowerShell as if they would use it. A former colleague of mine was quizzed why he would use PowerShell to write a script that would run on a Windows Server. They had expected him to write a C# program.
PowerShell... C#... both of which are Microsoft. PowerShell is .NET under the hood. That doesn't change my comment.
> Most developers don't want to use Linux at all.
I don't know if this is necessarily true. Many of the developers I know prefer GUI applications to CLI tooling, which I can get behind. That has nothing to do with Linux vs Windows though. But my struggles with Windows are plentiful, and the same goes for all my colleagues. I have a hard time believing that we are the outliers and not the rule.
> Most developers don't want to use Linux at all.
(looks at the install numbers for Linux vs Windows in the server space) I'm not so sure.
We are the outliers. My co-workers haven't even removed the awful Windows weather app and search bar from the taskbar.
Sorry for the snarky comment, but then those devs are simply bad. Windows is legacy, the future is in open source.
> Sorry for the snarky comment, but then those devs are simply bad
Yes. That is the majority of developers. I had to explain to a dev today (nice enough guy) that he has to actually run the tests.
> Windows is legacy, the future is in open source.
You can claim the future is open source, but the industry has moved towards SaaS, PaaS and IaaS, which is even more lock-in than using a proprietary OS such as Windows.
So while you might have an opensource OS, many of the programs you use will be proprietary in the worst way possible.
Could you please stop creating accounts for every few comments you post? We ban accounts that do that. This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.
You needn't use your real name, of course, but for HN to be a community, users need some identity for other users to relate to. Otherwise we may as well have no usernames and no community, and that would be a different kind of forum. https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...
> Many developers don't even really know how to user a terminal and rely on GUI tools.
Fortunately, Linux users can also avail themselves of a graphical interface as well.
Your snark at my comments is completely unwarranted.
I really shouldn't have to explain what follows. But I will.
Installing any dev tooling that is third party is done on the command line. Look up the instructions for installing Node LTS on Debian, or .NET, or Golang. You need to use the command line. Even on easier to use Distros they have the same procedure. Depending on the tooling you may need to set additional environment variables which are normally done in your .bashrc or similar.
What normally happens is people blindly copy and paste things into the terminal and don't read the documentation. This has been a problem on Linux since before Ubuntu was released. This isn't just limited to newbies either.
The state of GUIs BTW isn't great. Many of them look nice, and work reasonably well most of the time, *until they don't* e.g. If I double click a deb to install it, sometimes it will install. Other times it won't. So I don't even bother anymore and just use dpkg/apt. BTW it isn't any better with other distros. So I have to drop to the command line to fix the issue anyway.
So at some point you will need to learn bash, learn to read the man pages, and manually edit configuration files. It is unavoidable on Linux.
> I am surprised Micrsooft didnt use the opportunity to create a micrsoft specific Linux distro
The last one didn’t do so hot, they named it “Xenix”
It was a lousy distro, it didn't even include the Linux kernel!
On the other hand, according to AT&T, Xenix accounted for about half of the worldwide Unix licenses in the late 1980s.
That wasn't Linux. Unix != Linux.
>About a month ago I heard Microsoft had their own Linux distribution to help Microsoft Windows users feel more at home. From memory, it was a rather simple GNOME setup. Nothing special.
You're confusing Microsoft's first-party Linux distro Azure Linux (nee CBL-Mariner) that is intended as a regular MS-supported OS for containers, VMs, servers, etc, with various Windows-like skins for Linux DEs that people have made for years.
You think Microsoft maintains an entire secret distro just for Windows people to feel 'at home'.
> You think Microsoft maintains an entire secret distro just for Windows people to feel 'at home'.
Sorry, I don't understand the point you are making.
I did not suggest they had a "secret distro". I am suggesting they could have claimed a share of dominance in the Linux distro space as the default WSL distro.
I would venture a guess that the name recognition helps them. No developer wants to install a distro they’ve never heard of, but they do want to install Ubuntu. If WSL supports Ubuntu then they can cash in on that.
>Microsoft cannot dominate the Linux kernel but it can gain control in userland. Imagine if they gained traction with their applications being installed by default in popular distributions.
Yes, but how do they make money by doing this.
Unlike the socialist hiveminds that end up being behind the distros, Microsoft has salaries and bills to pay.
As far as I've always seen, everyone loves to leech on Microsoft's free stuff but nobody wants to pay for a product.
I do not claim to be a business expert, but I don't think their success comes just from the Windows operating system. I would say the success of Windows is not about the profit but about control of users. If the majority are on Windows, they are unlikely to change away from what they are familiar with.
Besides, new PCs/laptops come bundled with Windows; Microsoft has agreements with various retailers to ship Windows (Home edition) preinstalled. So in some ways, Windows is free for the user unless they pay for the Professional edition, or whatever is offered today.
Of course, the average user will create a Microsoft account to complete the install. :-)
Beyond the Windows OS, it is really the services they provide: Azure, Office 365, SQL Server, Power BI, etc. I would say THIS is where a lot of the money comes from: businesses willing to pay for them!
I have worked for companies that are willing to PAY for these things, all for "support".
If something goes wrong, raise it with Microsoft. Even if I know what the problem is, it is all about the ticketing system. Throw it to Microsoft and carry on.
Despite the above, Microsoft also has "free" software. They have started to open source some of their software and to provide Linux support as well as Windows: Visual Studio Code, SQL Server, PowerShell, etc.
It comes back to my point. When they presented WSL, they could have provided an "MS Linux" distro, promoted as "ease for Windows users", and if it had become a popular distro, it would have given Microsoft more control in userland... which would have pulled most Windows users away from Ubuntu, etc.
Like Windows, it is a method of keeping your userbase relying on what they already know.
Now I'm waiting for EDLIN but with unicode.
I remember you could use it in a batch file to script some kinds of editing by piping the keypresses in from stdin. Sort of a replacement for a subset of sed or awk.
I haven't tried but this should be possible with vi too. Whether that is deeply cursed is another question.
I think ed is what you’re looking for (possibly with -s).
Ironically, given the mention of vi, it is ex that is what red_admiral is looking for. (-:
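For anyone curious what that style of scripted editing looks like when driven from a program, here is a minimal sketch in Rust (chosen since that's what msedit is written in). It simply pipes commands into POSIX `ed -s`, the tool suggested above; the file name `notes.txt` and the substitution are made-up placeholders, and it assumes an `ed` binary on PATH.

    use std::io::Write;
    use std::process::{Command, Stdio};

    fn main() -> std::io::Result<()> {
        // Drive `ed -s` (-s suppresses the byte-count chatter) by piping
        // editing commands through stdin; the same idea as feeding
        // keypresses to EDLIN/EDIT from a batch file.
        let mut child = Command::new("ed")
            .arg("-s")
            .arg("notes.txt") // placeholder file name
            .stdin(Stdio::piped())
            .spawn()?;

        // ,s/old/new/g substitutes on every line; w writes; q quits.
        child
            .stdin
            .take()
            .expect("stdin was piped")
            .write_all(b",s/old/new/g\nw\nq\n")?;

        let status = child.wait()?;
        println!("ed exited with {status}");
        Ok(())
    }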
This is just a "because I wanted to" project. And I get that; done a lot of those myself just to understand what the hell was going on. But the rewrite of turbo vision into FPC and compiling to half a dozen targets has been around for 20 years. Turbo vision is probably the best text mode windowing library in existence. The cool fun kicks in when you can map a whole text screen to an array like so: var Screen: Array[1..80,1..25] Of Byte Absolute $B800; // or something like that as i recall
What turbo vision brought to the game was movable, (non) modal windows. Basically a lot of rewriting that array in a loop. Pretty snappy. I made a shitload of money with that library.
For those curious, here is a modern port of the C++ Turbo Vision that also supports Unicode:
https://github.com/magiblot/tvision
lmao, is anybody still using Turbo C++?
You'll be surprised if I tell you that several universities in India have not updated their curriculum in a very long time, and Turbo C++ (with its non-standard C++ flavor) is the weapon of choice. The school board in the '00s, which preferred to teach a programming language for CS, used to build its curriculum around this C++ dialect. I passed my high-school board examinations with this language. (It was known to be outdated already in 2004; the smart kids knew real C++ programming happened in the Visual Studio 6 ecosystem, but one still had to deal with it to clear the exams.)
Admittedly, a few things have changed in the last couple of years. MATLAB is being replaced by Python. Teaching the 8085 & 8051 is being replaced by RasPi/Arduino. The 8086 is taught alongside ARM & RISC, and not touted as SoTA.
I last saw Turbo being used in 2016-17 in a university setting, inside DOSBox (because Windows 7+ has dropped support for such old programs). Insane, but true.
Yeah, I also learned C++ via Turbo C++ in school in India in the early 2000s. Googling for "conio.h" shows Indians still talking about it in blogs and C/C++ forums as of 2024.
> You'll be surprised if I tell you several universities in India have not updated their curriculum in a very long time
I once asked an Indian colleague why Indians use US/UK-nonstandard English like "kindly", "do the needful", and "revert".
He thought about it a minute, then said "Oh, the texts everyone uses to learn English say that proper letters must always begin with 'Kindly,'".
Sokath, his eyes uncovered.
embrace. extend. deadend.
Nice. This editor could see a lot of use in such places if it gains developer-oriented features such as LSP, DAP and tree-sitter parsers. As a Rust-written editor, it will probably be quite a bit easier on resources than the usual modern choices, which generally involve VSCode or JetBrains plus language-specific plugins.
Core memory unlocked... When I was ~10-12, I asked my dad (who knew nothing but thought he knew everything about computers) how to make programs for Windows because I couldn't in QBasic. His answer was "with C++!". He came home with a book Learn C++ In 24 Hours that had Turbo C++ on a single 3.5" floppy disk. Naturally, that did not work, but I still had fun failing to compile every program I attempted to write.
OpenWatcom is the preferred choice of those still writing DOS applications, but there are those that still use Turbo C++ for the nostalgia.
Oh, a few years ago I wanted to write a simple program for dos. Since this is a Linux-only household otherwise, I was delighted to see OpenWatcom has a Linux port. I spent a good half hour trying to get a simple first version of the program I wanted to write running, but it always crashed right away. I simplified more and more until I basically arrived at hello world. On a hunch I ran the windows version of OpenWatcom with wine, and lo and behold, the program ran flawlessly! Once I googled that I found a couple of forum threads where people went like "yeah sure the Linux port produces broken binaries" because of course.
It's never the compiler until it's the compiler. Just didn't expect it during some simple fun coding at home. :)
I wish I was.
array[1..25, 1..80] of Word absolute $B800:0000.
Arrays in TP were laid out in row-major order, and each character was represented by two bytes, one denoting the character itself and the other the attributes (foreground/background color and blinking). So, even better, array[1..25, 1..80] of packed record ch: char; attr: byte end absolute $B800:0000.
Replace $B800 with $B000 for monochrome text display (mode 7), e.g., on the Hercules.
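For readers who think in Rust rather than Turbo Pascal, here is a hypothetical rendering of that same cell layout. It is purely illustrative; actually touching the buffer on DOS would need the segment:offset address and unsafe pointer work.

    /// One cell of the 80x25 colour text buffer at segment 0xB800:
    /// a character byte followed by an attribute byte.
    #[allow(dead_code)]
    #[repr(C)]
    #[derive(Clone, Copy)]
    struct Cell {
        ch: u8,   // code-page character
        attr: u8, // fg/bg colour plus blink bit
    }

    /// 25 rows of 80 cells in row-major order, 4000 bytes total,
    /// matching the DOS text-mode buffer described above.
    type TextScreen = [[Cell; 80]; 25];

    fn main() {
        assert_eq!(std::mem::size_of::<TextScreen>(), 25 * 80 * 2);
        println!("text buffer is {} bytes", std::mem::size_of::<TextScreen>());
    }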
I am curious about how you made money with it, if you don't mind sharing.
My first job out of uni was at a company that sold a TV advertising application written for DOS. It did all the reports, put together spot advert packages, measured reach and frequency, cost per point, etc. It used Nielsen ratings for data. The company at the time paid programmers commissions along with salary. The app still lives on in Windows, but I've been out of that game for decades. Written in TP for DOS, then Delphi for Windows.
Honestly dude, this is clever. Good on you for finding an opportunity to make a useful tool and you made out like a bandit in the process. :)
That was the toolchain that my company used. Turbo Vision was a Borland product, back when Philippe Kahn was running the company. We were that far ahead of the curve for "shrinkwrapped software development" at the time. That legacy, Delphi and FPC, still sets the standard for desktop, native dev, really for the last 30 years.
Kahn is still pretty active: https://philippekahn.com/
I skimmed the blog. I detect LLM output, but no person.
I think we're going to have to get used to feeling like this. Makes me sad a little.
The broadcast industry continues to move at a slow pace compared to IT, and understandably - it's live TV, you need things to work 24/7/365.
That means there's always an opportunity for the resourceful.
Every time I see a new modern TUI framework, my disappointment is the same: "Oh. This isn't as good as Turbo Vision."
I always thought Charm was pretty robust as far as TUI tools go[0]
[0]: https://charm.sh/
Turbo Vision was truly immersive. I used it in Turbo C and also in Paradox 4.5.
So good.
Funny how something that ran in a tiny box on a 386 could feel more responsive than some modern GUIs. Turbo Vision really nailed the basics.
I swear the characters appear on the screen before I press the keys when I'm in a real (not emulated) Linux terminal. I can't feel the lag as I'm typing this comment into a basic textarea, but it's clearly there, because a terminal feels magical.
> This is just a "because I wanted to" project.
It's not. They needed a small TUI editor that was bundled with Windows and worked over ssh.
https://news.ycombinator.com/item?id=44034961
Well, I don't have the rights to bundle anything with windows, nor would I want to. All you'd need is a thin player to reproduce a TUI screen if done in FPC, and it wouldn't be limited to Windows. All I'm suggesting is we tend to have some recency prejudice in our development, even when it costs more time/money than it should. I'm sure I've done the same over the years.
I'd love for an interface like that on VSCode that runs in a terminal even remotely.
Fun. I must admit I don't really know who this is for, but it seems fun.
It's for people that want to use the Windows Terminal to edit files. The old `edit` command has been unsupported on Windows since 2006, so there was no Microsoft-provided editor that could be used in the command line since then.
It's impressive to see how fast this editor is. https://github.com/microsoft/edit/pull/408
> By writing SIMD routines specific to newline seeking, we can bump that up [to 125GB/s]
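For a sense of what that SIMD work replaces, here is the naive scalar baseline one would write first. This is a sketch only, not the code from the PR; msedit's hand-written SIMD scans many bytes per instruction, which is where the quoted throughput comes from.

    /// Count newlines the straightforward way: one byte at a time.
    fn count_newlines(buf: &[u8]) -> usize {
        buf.iter().filter(|&&b| b == b'\n').count()
    }

    fn main() {
        let text = b"line one\nline two\nline three\n";
        assert_eq!(count_newlines(text), 3);
        println!("newlines: {}", count_newlines(text));
    }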
Is... this a meaningful benchmark?
Who's editing files big enough to benefit from 120GBps throughput in any meaningful way on the regular using an interactive editor rather than just pushing it through a script/tool/throwing it into ETL depending on the size and nature of the data?
At work we have to modify some 500 MB XML's every now and then, as the source messes them up in non-repeating ways occasionally.
Typically we just hand edit them. Actually been pleasantly surprised at how well VS Code handles it, very snappy.
I use CudaText for big files. It is like an open-source Sublime.
For a text editor, yes, absolutely.
As developers, we routinely need to work with large data sets, be it gigabytes of logs, CSV data, SQL dumps or what have you.
Not being able to open and edit those files means you can't do your job.
I work exclusively outside of the windows ecosystem, so my usual go to would be to pipe this through some filtering tools in the CLI long before I crack them open with an editor.
You could make an argument for emacs, but probably not using emacs as a pure text editor.
But are you really trying to do that on a Windows Server? I feel like there are better tools for the job.
"Better tools for the job" isn't always "the tool currently bringing in the $$$$$$$$$$$$$$$". So you live with it.
Sure, maybe by switching to linux you can squeeze out an extra CPU core's worth of performance after you fire your entire staff and replace them with Linux experienced developers, and then rewrite everything to work on Linux.
Or, live with it and make money money money money.
> make money money money money.
Subject, of course, to Microsoft allowing you to continue to use their software.
I have to scroll through huge files quite frequently, and that's the reason I have Sublime Text installed, as it deals with them very well.
less works pretty well if you don't need to edit the files.
I have FAR installed for the same reason.
Who cares? It’s fun. Programming can be fun.
To turn this around, you can have fun and ask if something is meaningful or not outside the fun at the same time. If it is, great. If it's not, no harm.
I'm not saying that doing this can't be fun, or even good to learn off of, but when it's touted as a feature or a spec, I do have to ask if it's a legitimate point.
If you build the world's widest bike, that's cool, and I'm happy you had fun doing it, but it's probably not the most useful optimization goal for a bike.
Not a great analogy. This editor is really fast. Speed is important, to a point. But having more of it isn't going to hurt anything. It is super fun to write fast code though.
I don't think a sentence in a big report page is counted as touting.
Not on the regular, but there are definitely times I load positively gigantic files in emacs for various reasons. In those times, emacs asks me if I want to enable "literal" mode. Don't think I'd do it in EDIT, though.
As a specific benchmark, no. But that wasn't the point of linking to the PR. Although the command looks like a basic editor, it is surprisingly featureful.
Fuzzy search, regular expression find & replace.
I wonder how much work is going to continue going into the new command? Will it get syntax highlighting (someone has already forked it and added Python syntax highlighting: https://github.com/gurneesh9/scriptly) and language server support? :)
Right, these are more useful features, IMO, than the ability to rip through 125GB of data every second. I can live without that, but syntax highlighting's a critical feature, and for some languages LSP support is a really big nice-to-have. I think both of those are, in this day and age, really legitimate first-class/built-in features. So are fuzzy searching and PCRE find&replace.
Add on a well-built plugin API, and this will be nominally competitive with the likes of vim and emacs.
Challenge. Accepted.
That's pretty handy. I was having to use bash -c "vi myfile.txt" which was a bit annoying.
If you were doing that to invoke WSL, note that these days you can do `wsl command arg arg arg...`
Probably more like need to use it. Basically nano for windows
It’s right there in the readme actually:
> The goal is to provide an accessible editor that even users largely unfamiliar with terminals can easily use.
That may be the written goal, but I doubt that's the actual reason the project exists.
Yeah ... I don't think there's any overlap between "users largely unfamiliar with terminals" who want something easy to use, and 'Linux users who are sufficiently technical that they would even hear about this repo'.
Here's a scenario. You're running a cluster, and your users are biologists producing large datasets. They need to run some very specific command line software to assemble genomes. They need to edit SLURM scripts over SSH. This is all far outside their comfort zone. You need to point them at a text editor, which one do you choose?
I've met biologists who enjoy the challenge of vim, but they are rare. nano does the job, but it's fugly. micro is a bit better, and my current recommendation. They are not perfect experiences out of the box. If Microsoft can make that out of the box experience better, something they are very good at, then more power to them. If you don't like Microsoft, make something similar.
> You need to point them at a text editor, which one do you choose?
mcedit ?
> You're running a cluster, and your users are biologists producing large datasets. They need to run some very specific command line software to assemble genomes. They need to edit SLURM scripts over SSH. This is all far outside their comfort zone. You need to point them at a text editor, which one do you choose?
Wrongly phrased scenario. If you are running this cluster for the biologists, you should build a front end for them to "edit SLURM scripts", or you may find yourself looking for a new job.
> A Bioinformatics Engineer develops software, algorithms, and databases to analyze biological data.
You're an engineer, so why don't you engineer a solution?
The title is a bit confusing depending how you read it. Edit isn't "for" Linux any more than PowerShell was made for Linux to displace bash, zsh, fish, and so on. Both are just also available with binaries "for" Linux.
The previous HN posts which linked to the blog post explaining the tool's background and reason for existing on Windows cover it all a lot better than a random title pointing to the repo.
TIL PowerShell exists for Linux.
But.. why?
Well, parts of it do, anyway.
As with .net, it is not intended to let you easily get away from Microsoft.
https://learn.microsoft.com/en-us/powershell/scripting/whats...
Well why not?
Is there supposed to be a single elected shell for Linux? Powershell on Linux is just one of plenty others.
I'm not against it. Absolutely go for it.
I just wonder what was the reason to port it and then I would like to have a word with a real living person who is actually using that shell.
PowerShell lends itself really well to writing cross-platform shell scripts that run the same everywhere you can boot up PowerShell 7+. Its origins in .NET scripting mean that some higher-level idioms were already common in PowerShell script writing even before cross-platform support existed; for instance, using `$pathINeed = Join-Path $basePath ../sub-folder-name` will handle path separators smartly rather than just trying to string-math it.
Its object-oriented approach is nice to work with and provides some nice tools that contrast well with the Unix "everything is text" tooling approach. Anything with JSON output, for instance, is really lovely to work with via `ConvertFrom-Json` as PowerShell objects. (Similar to what you can do with `jq`, but "shell native".) Similarly, with `ConvertTo-Json` for anything that takes JSON input, you can build complex PowerShell object structures and then easily pass them as JSON. (I also sometimes use `ConvertTo-Json` for REPL debugging.)
It's also nice that shell script parameter/argument parsing is standardized in PowerShell. I think it makes it easier to start new scripts from scratch. There's a lot of bashisms you can copy and paste to start a bash script, but PowerShell gives you a lot of power out of the box including auto-shorthands and basic usage documentation "for free" with its built-in parameter binding support.
I believe this was the original announcement https://azure.microsoft.com/en-us/blog/powershell-is-open-so.... I have used it on Linux and it is included by default in Kali and ParrotOS.
It's a windows 11 terminal editor. Don't get confused by the fact that it also works on Linux.
I dunno, I spent a lot of years (in high school at least) using Linux but being pretty overwhelmed by using something like vim (and having nobody around to point me to nano).
EDIT.COM, on the other hand... nice and straightforward in my book
I dunno; I've used edit since I heard of it, instead of figuring out why my vim config breaks on Windows.
I might use nano via WSL (or at that point just nvim), but that has its quirks too.
It occupies the same space as micro did for me, but it is / it will be preinstalled, so it's better. (Also the reason I even cared for vi at first.)
There's no shortage of less technical people using nano for editing on Linux servers. Something even more approachable than that would have a user base.
Especially noting it's a single binary that's just 222kb on x86_64— that's an excellent candidate to become an "installed by default" thing on base systems. Vim and emacs are both far too large for that, and even vim-tiny is 1.3MB, while being considerably more hostile to a non-technical user than even vim is.
I can definitely see msedit having a useful place.
Midnight commander comes with mcedit.
well the editor was obviously designed primarily for Windows, not sure why the title says Linux
My guess would be that there are some people at MS who, somehow, can still do something fun, because they haven't been assigned to yet another project on how to make the OOBE even more miserable.
/rant Today I spent 3 (three) hours trying to set up a new MSI AIO with Windows Pro. Even though it would be joined to the local AD DS and managed from there, I had to join some Internet-connected network, set up 3 stupid recovery questions that would make NIST blush, and wait another 30 minutes for a forced update download which I could not skip. Oh, something went wrong? Let's repeat the process 3 times.
Perhaps those are the things that don't take a Ph.D. to develop.
There are already plenty of those, such as jed, mcedit, etc.
This particular application is incredibly basic -- much more limited than even EDIT for DOS.
Nano gang
I’ll gladly replace vim with it, especially if it has/gets LSP support or searching via ripgrep. I’m using Helix now but like a good tui.
This is for me, as a saner replacement for nano in the terminal, since I hate vi.
It's a huge improvement over notepad
I thought this might work with Enter-PSSession but it unfortunately produced Error 0x80070006: The handle is invalid.
Insane that we don't have TUIs in remote sessions in 2025.
Agree 100%, that's the biggest difference I feel whenever I'm doing remote Powershell vs. ssh, it always feels like a struggle just to make basic file modifications.
I appreciate the sense of humor coming from this project. F.ex. from the release note: "As Steve Ballmer famously said: Fixes! Fixes! Fixes!".
Refreshing to see employees can have fun in a multi billion dollar company.
I was hoping this would work over ssh in a macOS Terminal.app, but last I tried it was inserting all kinds of weird characters into the edited text files.
Windows ships an official OpenSSH server these days, but so far there haven't been any good official text editors that work over OpenSSH, as far as I know.
I've had to resort to "copy con output.txt" the few times I needed to put things into a text file over windows-opensshd...
Maybe using ucs2 encoding, instead of utf8?
Discussion (271 points, 1 month ago, 185 comments) https://news.ycombinator.com/item?id=44031529
Back in 1993, I would open up binary files in edit and enjoy seeing hearts.
That, the DOS defrag visualisation, and the hex-editing my own savegames is pretty much why I'm a developer today.
Ah, the memory of going to bed while the 500MB harddrive defrags over-night, sleeping next to the endless clicking and spinning
I came here assuming that this was a clone of Gemini CLI (which is a clone of Claude Code). Both pleased and disappointed to be wrong.
The lengths people will go just to not have to learn Vim continues to surprise me.
Fun project #1: Get the binary size comparable to EDIT.COM
Fun project #2: Port to MS-DOS (with DPMI)
Fun project #3: Port to 16-bit MS-DOS (runs on original 8086)
How long until it becomes Copilot 365 Edit?
> This editor pays homage to the classic MS-DOS Editor
Oddly, it looks more like Borland's editor.
TurboVision had horizontal scrollbars. Edit never did.
https://arstechnica.com/gadgets/2025/06/microsoft-surprises-...
The screen shot says differently.
It's nice to see an editor that explicitly isn't an IDE and is more like Notepad. Often when editing config files and such, it's more convenient to use something like Notepad than something like VSCode.
This editor doesn't have delusions of grandeur; it focuses on usability more than features, and it is better for it.
fake. it's not blue.
Believe you could change the background color in DOS edit.
If you are an idea guy and want your idea made real, all you have to do is make a fancy fake trailer announcing it, then watch the company create it. The suits are so empty that anyone external can power-steer those ghost ships.
oh, the memories. gorilla.bas :)
It can still be played in a browser: https://archive.org/details/GorillasQbasic
also, nibbles.bas
Do Edlin next.
Reminds me of my days on a support line.
"Type edit autoexec.bat....." etc
And DEBUG. It's sad we no longer have a debugger/(dis)assembler/binary editor bundled with Windows. That thing was tiny but you could do so much with it.
I love TUI's lately.
I just wish this was on nixpkgs
Any Flatpak or Snap editions?
Nano is completely ok for this. Nope, let's burn some cash and reinvent the wheel.
Instead of donating to Nano devs, or hire some of them or something.
Stupid corp at their finest.
Nano is a bare-bones editor with its own wacky key-bindings. What is the point? If I want to learn new key-bindings I'll learn a full-featured editor.
msedit's key-bindings are based on IBM CUA. It's immediately familiar to a great many people.
Nano can be configured with CUA keybindings, I do it.
But I’m glad someone wrote one of these in rust.
story in the news already https://digitrendz.blog/newswire/technology/19460/microsoft-...
I love edit.
It was my favorite editor back in the old days.
It worked, did the basics really well and got the job done. Glad to see it’s back.
Was hoping for MS-DOS editor with built-in LLM.
I do not understand the love this project is getting. It was a shit editor when it was introduced; we'd all started using something else years before.
agreed. it's nostalgic sure but it's not a good editor.
It’s for fixing boot.ini on a broken headless vm at 2am, not for software development.
No musl binary. For glibc Linux distributions only perhaps.
Just build it yourself. Or you can install glibc on both Void and Alpine, if you want the pre-built binary.
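In principle a fully static build is one cargo target away, assuming none of the crate's dependencies insist on glibc. Untested sketch:

    # Build a musl-linked binary instead of relying on the system glibc.
    rustup target add x86_64-unknown-linux-musl
    cargo build --release --target x86_64-unknown-linux-musl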
What about BSD?
Maybe this makes up for them destroying Notepad for some people.
Microsoft loves to own general terminology like "edit" for their products. I have no idea how this flies.
SQL Server, as if it's the one that invented SQL or it's the only product that serves SQL.
The copy command is called "copy", which kind of makes sense? I remember once seeing a colleague's .bashrc with things like "alias copy=cp". Flags won't work the same way, of course.
Sure, "chcp" is a mouthful, but "del" or "erase" makes as much sense as learning that "rm" is short for remove. You pick up either convention quickly enough, except that I'm constantly typing "where" when I mean "which". Maybe I should make an alias or something (sketched below).
Don't get me started on PowerShell's look-we-can-use-proper-words-lets-see-how-long-we-can-make-this.
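The alias idea mentioned above is a one-liner each way. Hypothetical ~/.bashrc entries for DOS-trained fingers on a Unix box:

    alias where=which
    alias copy=cp     # as in that colleague's .bashrc; the flags still differ, of course
    alias del=rm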
I know what you mean but everyone does this.
Apple has Pages, Numbers, Keynote, etc. Google has Drive, Docs, Sheets, etc. Meta has Messenger. Far too many examples to list.
Conversely, it would be ridiculous to use non-obvious names.
> I have no idea how this flies.
They aren't trademarking it and probably can't.
But there's no reason anyone can't use generic naming for their products. Many software applications do, and quite frankly it's more descriptive for attracting new users than coming up with non-real names.
I would argue the only reason made-up names exist is to keep marketing departments employed, needlessly explaining to users what the products are.
Can this be ported to MacOS?
It already works? There just isn't an official build yet - just `cargo run` yourself.
It's pretty easy to build there; I've tried this on MacOS and Linux.
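For anyone who wants to try it without waiting for an official build, it's the usual cargo routine. Sketch only; check the repo's README for the exact toolchain it expects (it may require a recent or nightly Rust), and the binary name here is assumed:

    git clone https://github.com/microsoft/edit
    cd edit
    cargo build --release
    ./target/release/edit somefile.txt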
The one thing that vexed me, for something based on edit, was CTRL+P being hijacked for something that isn't print. It's like we forgot about CUA over the last 15 years.
Not an emacs user I assume?
It has worked on macOS since it was published some two weeks ago. I have been using it since then.
The binary is 219,232 bytes if compiled for Apple silicon (ARM).
It will take more than nostalgia and rust to tear me away from my neovim setup that has been built up/improved on over the years. LSP, DAP, autocompletion, aliases and bindings for each programming language. Lazily loaded of course so it’s still snappy.
Manage configuration, and external dependencies such as lsps with nix.
Then have separate nix shells for each project to load tooling and other dependencies in an isolated/repeatable session. Add in direnv to make it more seamless development experience.
You are not the target audience. This is aimed at casual users and beginners, and it's already in a good shape to replace nano with its user-friendly, mouse-enabled TUI.
>in a good shape to replace nano
...
Anyways, here's how to tell if your LED sign is cheap!
Huge +1, same for me, but with GNU Emacs instead of neovim; I can completely appreciate the philosophy.
I don't understand why they want to go with DLLs for scripting instead of WASM + wamr which is really small. Maybe I'm just really inexperienced in this space.
The possibility of using WASM was at least under discussion last month:
https://github.com/microsoft/edit/issues/17
Needs LSP and Tree-Sitter :)
If the editor focused on becoming an IDE instead of being idiot-proof and cleanly designed, what would distinguish it from the 1,000 other editor projects?
and scripting :)
I'd quite like to see VSCode for the terminal.
Oh yes, it needs VBScript support ;)
Another Microsoft nerd-washing project.
'nerd-washing' such a great term
Runs on Windows too! It has "Redo", not to be confused with "Undo Undo". Unfixed-width Tabs are a huge leap forward. LF, sans CR, will cut your file sizes in half.
Did anyone ask for yet another Microsoft editor on your Linux machine?
Meanwhile, they forced AI Copilot bloat into Notepad, whose singular use-case was supposed to be that it does one thing well without unnecessary features.
Unfortunately, the new Edit isn't safe from such decisions.
While Satya might have made the change to "Microsoft <3 FOSS", the Gates/Ballmer era was much better towards Windows developers.
Now we have a schizophrenia of Web and Desktop frameworks that Microsoft themselves hardly use. What used to be a comfortable VS wizard or plugin is now, e.g., a CLI tool that dumps an Excel file, showing that the newer blood, and their upper management, have hardly any Windows development culture.
I don't know how many people don't know this, but now you actually can't release an app on Windows without it showing a warning while installing, unless you sign it with EV certificates, which cost upwards of $500 for a year.
As you may have guessed, this simply pushes out smaller devs. This used to NOT be like this. It should NOT be like this.
I would highly recommend looking into Azure code signing. It is confusing to set up, but it comes with instant reputation and "only" costs $10 a month.
EV certificates have always felt like an utter scam and extortion to me. At least now there is an alternative.
Unfortunately Apple normalised it, first with the iPhone. There are upsides (theoretically - less trash apps), but the review/curation process doesn't scale, and yep - the small devs are effectively told to bug off.
10 years ago I wanted to build a Love2D game, and release it for the three major OS's. The .love files are effectively ZIP archives, kinda like cartridges, but you need the correct Love2D version (they broke API compat every year or so). Windows and Mac used to be: "cat love.exe game.zip > game.exe".
Linux gave me the most crap, because making a portable, semi-static build was a nightmare; you couldn't rely on distros because each one shipped a different version of love.
Now Linux is actually becoming more viable, not because it's making that much progress, but because the two mainstream platforms are taking steps back.
At least signing is free for MacOS apps. And back when I used it, it was only $100/year for iPhone apps.
... where is signing free for macOS apps?
You can use an ad-hoc signature to sign, but people who download the app will still have to jump through hoops to run it.
Good. This might suck for open source devs, but for normies that might get a random exe link this is good. I've gotten numerous phone calls from relatives when they try to run some unrecognized app; most of the time it's benign, but on a few occasions it was something malicious.
It's a heavy tax to protect the ignorant. I hear things like this and think how I've been using a computer for nearly 4 decades and it's never once happened to me. Maybe those types of people need to re-evaluate their technology choices (maybe iPad is more appropriate) instead of taxing the entire ecosystem to protect them from themselves.
Low-income countries don't have the money for iPads. My parents run on a 300 Euro computer bought 5 years ago. My dad is technical enough to get around a computer, but he's in his 60s now. My mom can open Facebook and YouTube. Sometimes either of them downloads stuff and opens it. So your solution is "make millions of consumers spend $$ on overpriced hardware and an even more closed-off system, so a few hundred open source devs don't spend $500 to verify their app (which they will have to do if they want to release on the iOS platform either way)"? Ain't no way.
If you count the number of ignorant people who use Windows versus the people like you, you'll quickly realize the tax is very cheap for the level of protection it offers to the number of people it offers it to.
The correct answer should be a legally-mandated one-time escape hatch.
Bury it as deep as Microsoft wants, but...
A popup warning is not a heavy tax.
> Good. This might suck for opensource devs, but for normies that might get a random exe link this is good
That random exe link is signed by Microsoft.
Now I get why a project I work on is signed for Apple and not for Windows... 5x the price, jeez
There are currently no ideal native app development frameworks on Windows. WinForms is the closest thing
I'm so glad to hear that from someone unprompted. I tried WPF and it was a million times harder to use than WinForms, and I couldn't even be bothered to try out MAUI (although I accept it as an apology for WPF lol). I'm still using a WinForms application every day (Git Extensions) and have been able to contribute to it not least because it's the good old familiar WinForms.
This is not to say that WinForms is without its problems. I often wonder what it could be like if all the effort of making WPF and MAUI had gone into maintaining, modernizing and improving it.
I think that the native GUI development APIs provided by OS vendors need a kind of "headless" implementation first, where you can build UI in pure code like winforms, and then they should offer a framework on top of that. I, personally, hate XAML. It's stricter than HTML/CSS and very opinionated about how to organize your application. I feel that XAML frameworks should have a common Winforms-like API behind them that you can switch to any time you want. But I've found that using the C# code-behind APIs manually for WPF, UWP, MAUI, etc, is far more verbose than Winforms was.
My only major problem with Winforms is that it's still using GDI under the hood, which, despite what many people believe, is actually still primarily software-rendered. If they could just swap out GDI for Direct2D under Winforms' hood (or at least allow a client hint at startup to say "prefer Direct2D"), it would really bring new life to Winforms, I think.
I would also like a C++ native GUI API that's more modern than MFC
"C# Markup" [1] [2] sounds a lot like what you are looking for. As the only "second party" option in this space it's interesting that it is so MAUI only/MAUI focused, but I suppose that's the "new hotness".
There have been similar F# libraries and third-party C# libraries for a while that seem nice to work with in similar ways.
[1] https://learn.microsoft.com/en-us/windows/apps/windows-dotne...
[2] https://github.com/CommunityToolkit/Maui.Markup
Unfortunately that is something Microsoft seems incapable of.
MFC was already relatively bad versus OWL. Borland[0] kept improving it with VCL and nowadays FireMonkey.
Then there is Qt as well.
Microsoft instead came up with ATL, and when they finally had something that could rival C++ Builder, with C++/CX, a small group managed to replace it with C++/WinRT because they didn't like extensions, the irony.
With complete lack of respect for paying customers, as C++/WinRT never ever had the same Visual Studio tooling experience as C++/CX.
Nowadays it is in maintenance, stuck in C++17, working just good enough for WinUI 3.0 and WinAppSDK implementation work, and the riot group is having fun with Rust's Windows bindings.
So don't expect anything good coming from Microsoft in regards to modern C++ GUI frameworks.
[0] - Yes nowadays others are at the steering wheel.
Borland was pretty good on the GUI front, I think we're forgetting how easy it was to get something rolling in Delphi. It's baffling Microsoft still hasn't gotten their stuff together on this. They've been just releasing new frameworks since the WinRT era and hoping something sticks.
Microsoft's GUI problem is two-fold.
Firstly, that nobody believes them when they swear that {new GUI framework} will be the future and used for everything. Really. Because this time is not like those other times.
Secondly, pre-release user feedback. Ironic, given other parts of Microsoft do feedback well.
Imho, the only way MS is going to truly displace WinForms at this point is to launch a 5-year project, developed in the open, and guided in part by their community instead of internally.
And toss a sweetener in, like free app signing or something.
Agreed it is the easiest; however, it is also possible to use WPF in the same style as Forms, with more features. No need to go crazy with MVVM; stay with plain code-behind.
Having said this, from 3rd parties, Avalonia is probably the best option.
While I think Uno is great as well, they lose a bit by betting on WinUI as foundation on Windows, and that has been only disappointment after disappointment since Project Reunion.
We spent the better part of a calendar year researching what framework to update our MFC app to. We really liked the idea of staying first-party since our UI is explicitly Windows-only, and we looked at every framework - MAUI, winforms or WPF with a C# layer, WinUI3...
It quickly became apparent that WinUI3 was the only one even close to viable for our use case, and we tried getting a basic prototype running with our legacy backend code. We kept running into dealbreakers we hoped would be addressed in the alleged future releases, like the lack of tables, or the baffling lack of a GUI designer (which every other previous Win framework had).
...We're currently writing our GUI in Qt.
The new Edit.exe is indeed safe from those things.
A requirement for the tool is that it must remain as small as possible, so that it can be included in the smallest distributions of Windows, like Nano Server. It is the rescue text editor there.
I’m sure plugins are going to do all the things that everyone doesn’t want (or does want) but the default edit.exe will remain small, I’d bet money on it.
I recall reading Raymond Chen mentioning that Notepad actually gets more exercise than you'd expect; it's a common guinea pig for features: https://devblogs.microsoft.com/oldnewthing/20180521-00/?p=98...
I took a screenshot and pasted it into the new Win11 Paint. Even minimized, Paint was constantly using 5% CPU and sitting at ~250MB of RAM. I guess I can begrudgingly get over the RAM, but squandering the CPU like that is ridiculous.
What happened to pride or quality control or anything?
>What happened to pride or quality control or anything?
Sounds like some dangerous cowboy coding wrongthink you've got going on over there.
I assume it is basically an Electron app now. Anything will be bloated when it's a full web browser running some webapp.
No, it doesn't use Electron or webviews afaik. The UI is mostly WinUI2/UWP/XAML Islands based however
> What happened to pride or quality control or anything?
We are talking about Microsoft here.
It sucks so much - my ISP had an intermittent outage (some IPv4 / MTU issues), and I couldn't save files in Notepad without disabling it.
I was literally trying to configure Wireguard to get around the ISP issues.
I stopped using Notepad since they introduced tabs.
And if you uninstall the modern notepad, start's search doesn't find the old one.
As if start's search ever found anything...
Thank you for this!
I had to open Notepad and see it for myself. Wow! I see the Icon.
I remember Co-pilot just suddenly appearing in my taskbar and finding it annoying. Despite removing it, I still see it lurking around... and now I see it is a SIMPLE TEXT EDITING PROGRAM named Notepad.
Wow.
Every product has bizarre bloat. I understand things might get heavier over time with new features, but Office from like 20 years ago still works pretty great. In fact, I don’t even really see any new features that are missing in my normal use case. Actually, anything that DOES exist in a newer version is something I actively DO NOT want. For example, monthly/yearly subscriptions, popups that interrupt typing to advertise some new bloat, and dedicated buttons to import any file into a powerpoint presentation or email.
Look at Outlook. Literally less than 25% of the screen appears to be dedicated to email content. I say literally because I physically measured it and from what I remember it was 18% to 20%. Microsoft keeps adding these gigantic toolbars that each have duplicate buttons that often can’t really be adjusted, removed, or hidden. Or it may be an all-or-nothing scenario where something can be removed but then you can’t e.g. send emails.
Rather than fixing the problem, the solution is to add a new toolbar. This frequently keeps happening. Just one more toolbar with a select subset of buttons in one place so people can find it. Well now… We have some extra whitespace… Let’s throw in the weather there and why not put the news in too. What could possibly go wrong?
And then loading the news, some totally unrelated and non-critical feature they shove in forcefully by default frequently has at least one critical severe bug where there’s an async fetch process that spikes the cpu to max and crashes the whole system. There’s no way to disable news without first loading outlook and going into advanced settings, which of course is past the critical point of the news being loaded.
Go look at like Outlook 2003. It is nearly perfect. It’s clean, simple, and there’s no distractions. This is so amazing, like many Microsoft products that seem to be built by engineers, but I don’t know how we get to modern outlook that feels like it has 10 to 50 separate project manager teams bloating it up often with duplicate functionality.
This would be bad enough, but then again, instead of fixing it like I said before, or fixing it by reducing or consolidating teams or product work, we get ANOTHER layer of Microsoft bloat by having multiple versions of the same product. So we have Outlook (legacy), named that way to make you feel bad for using an old version, or named to scare you into believing it won’t be supported. Then there’s Outlook (New). Then there’s Outlook (Classic), which isn’t legacy or new but is a weird mix of things. Then there’s a web version that they try to force everybody into because it’s literally perfect and there’s no reason not to use it… Somehow they didn’t catch that emails don’t load in folders unless you click into them, or that sorting rules don’t work the same or don’t support all the same conditions. Rather than fixing it, you get attacked for using edge-case, frivolous, advanced, obscure functionality. Like who would want to have emails pre-sorted into any folder except the inbox? Shame on you for using email wrong, I guess.
I’ll skip over the part where there’s multiple versions of the multiple forks of outlook. But there’s also Government, Education, Student, Trial, Free, Standard, Pro, Business, Business pro, Business premium, etc.
The last infuriating point in my rant has to come down to their naming standards. For some reason they keep renaming something old to a completely new name and of all the names they could pick, it’s not only something that already exists but it’s another Microsoft product. This is a nightmare trying to explain to somebody who is only familiar or aware of either the old or the new name and this confusion is often mixed even on a technically capable and competent team. For bonus points, the name has to be something generic. Even like “Windows” which is not a great example because the operating system is so popular but you can imagine similarly named things causing search confusion. Or even imagine trying to search for the GUI box thing that displays files in a folder within the operating system, also called a window, and try to imagine debugging an obscure technical problem about that while getting relevant information.
There are so many Microsoft moments that things like adding AI to Notepad hardly faze me anymore. I don’t like that they do that, but I wouldn’t necessarily be so offended if their own description, the one they came up with in the first place, was what you mentioned. Constantly going against the information they invented themselves and chose to state as a core statement just irritates me.
Since you mentioned Outlook: at work we use Outlook 2019, and it's exactly like you mention.
The user interface is littered with useless crap, the File menu goes back to this weird, completely different UI layout, etc.
And the best part is that if the VPN goes temporarily down it fails to send/receive new emails until it has been restarted.
Let me say that again.
It fails at its core functionality if there's a glitch in the network and cannot send or receive emails. That's just a next level of incompetence.
Apparently you weren't there when Lync^H^H^H^HSkype for Business would retry an incorrect password infinitely, until it locked a user account.
Yes, even when running on an unconnected session on a Windows server/VDI somewhere.
> The last infuriating point in my rant has to come down to their naming standards. For some reason they keep renaming something old to a completely new name and of all the names they could pick, it’s not only something that already exists but it’s another Microsoft product.
Microsoft has seemingly sucked at naming things since at least the mid-90s. It's effectively un-search-engine-able, but I recall that in the anti-trust action in the mid-90s a Microsoft person was trying to answer questions about "Internet Explorer" versus "Explorer" (as-in "Windows Explorer", as in the shell UI) and it was a confusing jumble. Their answers kept coming back to calling things "an explorer". It made very little sense. Years later, and after much exposure to Microsoft products, it occurred to me that "explorer" was an early 90s Microsoft-ism for "thing that lets you browse thru collections of stuff" (much like "wizards" being step-by-step guided processes to operate a program).
Also, playing-back my "greatest hits" comment re: Microsoft product naming: https://news.ycombinator.com/item?id=40419292
I just googled some screenshots of outlook 2003 and I felt peace.
Why should “ms-edit” be avoided?
It'd be nice if they didn't recommend winget for installation, though. winget is an egregious security risk that Microsoft has simply pretended follows even minimal security practices, despite launching just four years ago with no protection from bad actors whatsoever and never implementing any improvements since.
disclaimer: I used to commit to winget a lot and now I don’t.
…but is it really less secure than brew or choco? The installers are coming from reasonably trusted sources and are scanned for malware by MS, a community contributor has to approve the manifest changes, and the manifests themselves can’t contain arbitrary code outside of the linked executable. Feels about as good as you can get without requiring the ISVs themselves to maintain repos.
The installers are coming from random people on the Internet. Most software repositories have trusted contributors and a policy of requiring a piece of software be arguably worthy of inclusion. Perhaps because Microsoft is afraid to pick winners, every piece of garbage is allowed on winget, and there's no way to restrict who can make changes to what packages.
There are ISVs that would like to lock down their software so they can maintain it but a trillion dollar company couldn't spare a dollar to figure out a "business process" to do this. As far as I know, Microsoft has a single employee involved who has laughed off any security concerns with "well the automated malware scanner would find it".
The "community contributors" were just... people active on GitHub when they launched it. Was anyone vetted in any way? No.
The Microsoft Store has actual app reviewers, winget has... "eh, lgtm".
The policy of including the author's name next to the project name, along with some indication that it really is the author and not an imposter, I think that's probably the best we're ever going to get, since at that point it just comes down to community trust.
winget is just Windows developers' version of curl | bash. Yet another example of Microsoft copying Linux features.
Windows already has
irm <URL> | iex
Except curl | bash definitely executes code by the author controlling the URL you put in, and if the URL is HTTPS, in a reasonably secure fashion.
With winget, there is no validation that the executable is from the official source, or that a third party contributor didn't tamper with how it's maintained.
> in a reasonably secure fashion
It's trivial for a remote server to hand two different versions of a script with the traditional `curl | bash` pipeline. https://lukespademan.com/blog/the-dangers-of-curlbash/
There is 0 validation that the script that you are piping into bash is the script that you expect. Even just validating the command by copying and pasting the URL in a browser -- or using curl and piping into more/less is not enough to protect you.
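One way to sidestep that particular trick is to save the script first, so the bytes you audit are exactly the bytes you execute. Minimal sketch; the URL is a placeholder:

    curl -fsSLo install.sh https://example.com/install.sh
    less install.sh                 # actually read what you are about to run
    sha256sum install.sh            # compare against a hash published out of band, if there is one
    sh install.sh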
>> Except curl | bash definitely executes code by the author controlling the URL you put in, and if the URL is HTTPS, in a reasonably secure fashion.
> It's trivial for a remote server to hand two different versions of a script with the traditional `curl | bash` pipeline.
I’m confused by this; it seems to be written in the tone of a correction but you both seem to be saying that you get whatever the server sends. (?)
> you both seem to be saying that you get whatever the server sends
Yes, but I am also saying that you can't verify that the script that is run on one machine with a pipe is the same script that runs on a second machine with a pipe.
The key part of the original statement is the server can choose to send different scripts based on different factors. A curl&bash script on machine 1 does not necessarily mean the same curl&bash script will be run on machine 2.
The tooling provided by a `curl | bash` pipeline provides no security at all.
With winget, there is at least tooling to be able to see that the same file (with the same hash) will be downloaded and installed.
There are ways to do this better. For example, check out https://hashbang.sh. It includes a GPG signature that is verified against the install script before it is executed.
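The general shape of that approach, for anyone who hasn't seen it (sketch; the URLs are placeholders, and it assumes the signer's public key is already in your keyring):

    curl -fsSLO https://example.com/install.sh
    curl -fsSLO https://example.com/install.sh.asc
    gpg --verify install.sh.asc install.sh && sh install.sh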
The parent is talking about MITM, which is prevented with TLS and curl but not winget. They are saying curl is strictly better, not that it is impenetrable. If you trust the domain owner, you can trust curl | bash, but you can't trust winget
Why can't I trust winget?
It's not hard to run the `show` command to see what a winget install will do. https://learn.microsoft.com/en-us/windows/package-manager/wi...
It's easy enough to view the manifests (eg, https://github.com/microsoft/winget-pkgs/blob/2ecf2187ea0bf1...) and, arguably, this is better than the MITM protection you would get using naked cURL & Bash, simply because there are file hashes for all of the installer files provided by a third party.
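Concretely, assuming the package ID for this editor is Microsoft.Edit:

    winget show Microsoft.Edit      # inspect the manifest details, source URL, and installer hash
    winget install Microsoft.Edit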
> They are saying curl is strictly better, not that it is impenetrable
Right. But it arguably is not strictly better.
> You can't trust winget
Again, this is not backed up by anything. I have trust in winget. I can trust that the manifest has at least been vetted by a human, and that the application that will be installed should be the one that I requested. I can not trust that this will happen with curl | bash. If the application that is installed is not the one that I requested, there is tooling and a process to sort out why that did not happen, and a way to flag it so that it doesn't happen to other users. I don't have this with curl | bash.
If you think HTTPS is performing code validation I have news for you.
HTTPS only guarantees the packets containing the unverified malicious code are not tampered with from the server to you. A server which could very well be compromised and alternate code put in its place.
You are drawing an egregious apples-to-oranges comparison here. Please re-read what you said.
You could serve digitally signed code over plain HTTP and it would be more secure than your example over HTTPS. Unfortunately there are a lot of HTTPS old wives' tales that many misinformed developers believe in.
curl | bash is absolutely on my very short list of “things I’ll never do” and I wince when I see it. rm -rf starting from / is another. I watched someone type in (as root) “rm -rf / home/user/folder” once. By the time I realized what had happened it was too late.
Probably entirely AI-generated.
Was this made just because someone wanted to do something in Rust?
No. The developer actually chose Zig initially.
* https://news.ycombinator.com/item?id=44034961