
Comments (434)

  • nullindividual
    There's a long-running debate over whether a 'hybrid' kernel is an actual thing, and/or whether NT is just a monolithic kernel.
    The processes section should be expanded upon. The NT kernel doesn't execute processes, it executes _threads_. Threads can be created in a few milliseconds, whereas, as noted, processes are heavyweight; essentially the opposite of Unices. This is a big distinction.
    io_uring would be the first true non-blocking async I/O implementation on Unices.
    It should also be noted that while NT as a product is much newer than Unices, its history is rooted in VMS fundamentals thanks to its core team of ex-Digital devs led by David Cutler. That pulls the 'feature history' back by a decade or more. Still not as old as UNIX, but "old enough", one could argue.
    [0] https://stackoverflow.com/questions/8768083/difference-betwe...
  • phendrenad2
    One thing I don't see mentioned in this article, and which I consider to be the #1 difference between NT and Unix, is the approach to drivers.
    It seems like NT was designed to fix a lot of the problems with drivers in Windows 3.x/95/98. Drivers in those OSs came from 3rd-party vendors and couldn't be trusted not to crash the system. So ample facilities were created to help the user out, such as "Safe Mode", fallback drivers, and a graphics driver interface that disables itself if it crashes too many times (yes, really).
    Compare that to any Unix: historic AT&T Unix, Solaris, Linux, BSD 4.x, Net/Free/OpenBSD, any research Unix being taught at universities, or any of the new crop of Unix-likes such as Redox. Universally, the philosophy there is that drivers are high-reliability components vetted, and likely written, by the kernel devs.
    (Windows also has a stable driver API and I have yet to see a Unix with that, but that's another tangent.)
  • kev009
    This is great! It would be interesting to see Darwin/macOS in the mix.
    On the philosophical side, one thing to consider is that NT is in effect a third system and therefore avoided some of the proverbial second-system syndrome.. Cutler had been instrumental in building at least two prior operating systems (including the anti-UNIX.. VMS) and Microsoft was keen to divorce itself from OS/2.
    With the benefit of hindsight, and to clear some misconceptions, OS/2 was actually a nice system but was somewhat doomed both technically and organizationally. Technically, it solved the wrong problem.. it occupied a basically unwanted niche above DOS and below multiuser systems like UNIX and NT.. the same niche that BeOS and classic Mac OS occupied. Organizationally/politically, for a short period it /was/ a "better DOS than DOS and better Windows than Windows" with VM86 and Win API support, but as soon as Microsoft reclaimed their clown car of APIs and apps it would forever be playing second fiddle, and IBM management never acknowledged this reality. And that compatibility problem was still a hard one for Microsoft to deal with; remember that NT was not ubiquitous until Windows XP, despite being a massive improvement.
  • sebstefan
    > The C language: One thing Unix systems like FreeBSD and NetBSD have fantasized about for a while is coming up with their own dialect of C to implement the kernel in a safer manner. This has never gone anywhere except, maybe, for Linux relying on GCC-only extensions. Microsoft, on the other hand, had the privilege of owning a C compiler, so they did do this with NT, which is written in Microsoft C. As an example, NT relies on Structured Exception Handling (SEH), a feature that adds try/except clauses to handle software and hardware exceptions. I wouldn't say this is a big plus, but it's indeed a difference.
    Welp, that's an unfortunate use of that capability given what we see today in language development when it comes to secondary control flows.
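    For context, SEH is the Microsoft C extension the article refers to: __try/__except blocks that can catch hardware faults (like an access violation) as well as software-raised exceptions. A minimal sketch, assuming MSVC; the deliberate fault and the filter here are purely illustrative:

        #include <windows.h>
        #include <stdio.h>

        int main(void) {
            int *volatile p = NULL;
            __try {
                *p = 42;  /* triggers a hardware access-violation exception */
            }
            __except (GetExceptionCode() == EXCEPTION_ACCESS_VIOLATION
                          ? EXCEPTION_EXECUTE_HANDLER : EXCEPTION_CONTINUE_SEARCH) {
                printf("caught the access violation via SEH\n");
            }
            return 0;
        }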
  • phibz
    The article hit on some great high points of difference. But I feel like it misses Cutler and team's history with OpenVMS and MICA. The hallmarks of their design are all over NT. With that context it reads less like NT was avoiding UNIX's mistakes and more like it was built on years of learning from the various DEC offerings.
  • Const-me
    I would add that on modern Windows NT, Direct3D is an essential part of the kernel; see dxgkrnl.sys. This means D3D11 is guaranteed to be available. This is true even without any GPU: Windows comes with a software fallback called WARP.
    This allows user-space processes to easily manipulate GPU resources, share them between processes if they want, and powers higher-level technologies like Direct2D and Media Foundation.
    Linux doesn't have a good equivalent for these. Technically Linux has the dma-buf subsystem, which allows sharing stuff between processes. Unfortunately, that thing is much harder to use than D3D, and very specific to the particular drivers that export these buffers.
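    A minimal sketch of what that guarantee looks like from user space, assuming the public D3D11 API (error handling trimmed): try a hardware adapter first, then fall back to the WARP software rasterizer.

        #include <d3d11.h>   /* link with d3d11.lib */

        /* Create a D3D11 device, falling back to the WARP software
           rasterizer when no usable GPU is present. */
        HRESULT create_device(ID3D11Device **dev, ID3D11DeviceContext **ctx) {
            D3D_FEATURE_LEVEL level;
            HRESULT hr = D3D11CreateDevice(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0,
                                           NULL, 0, D3D11_SDK_VERSION, dev, &level, ctx);
            if (FAILED(hr))
                hr = D3D11CreateDevice(NULL, D3D_DRIVER_TYPE_WARP, NULL, 0,
                                       NULL, 0, D3D11_SDK_VERSION, dev, &level, ctx);
            return hr;
        }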
  • andrewla
    Practically speaking there are a number of developer-facing concerns that are pretty noticeable. I'm primarily a Linux user, but I worked in Windows for a long time and have grown to appreciate some of the differences.
    For example, the way that command-line globbing works is night and day, and in my mind the Windows approach is far superior. The fact that the shell is expected to do the globbing means that you can really only have one parameter that can expand. Whereas Win32 offers a FindFirstFile/FindNextFile interface that lets command-line parameters be expanded at runtime. A missing parameter in Unix can cause crazy behavior -- "cp *" -- but on Windows this can just be an error.
    On the other hand, the Win32 insistence on wchar_t is a disaster. UTF-16 is ... just awful. The Win32 approach only works if you assume 64k Unicode characters; beyond that things go to shit very quickly.
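    For the curious, this is roughly what runtime expansion looks like with that interface; a sketch assuming the ANSI variants for brevity:

        #include <windows.h>
        #include <stdio.h>

        /* Expand a pattern like "*.txt" inside the program itself,
           instead of relying on the shell to have done it. */
        void list_matches(const char *pattern) {
            WIN32_FIND_DATAA fd;
            HANDLE h = FindFirstFileA(pattern, &fd);
            if (h == INVALID_HANDLE_VALUE) {
                fprintf(stderr, "no matches for %s\n", pattern);  /* the "cp *" case becomes an explicit error */
                return;
            }
            do {
                printf("%s\n", fd.cFileName);
            } while (FindNextFileA(h, &fd));
            FindClose(h);
        }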
  • nyrikki
    There are a number of issues, like ignoring the role of VMS, the fact that Windows 3.1 had a registry, the performance problems of early NT that led to the hybrid model, the hype around microkernels at the time, the influence of Plan 9 on both, etc...
    Cutler knew about microkernels from his work at Digital, OS/2 was a hybrid kernel, and NT was really a rewrite after that failed partnership.
    The directory support was targeting NetWare etc...
  • nmz
    > Linux's io_uring is a relatively recent addition that improves asynchronous I/O, but it has been a significant source of security vulnerabilities and is not in widespread use.
    Funny, opening the manpage of aio on FreeBSD you get this in the second paragraph:
    > Asynchronous I/O operations on some file descriptor types may block an AIO daemon indefinitely resulting in process and/or system hangs. Operations on these file descriptor types are considered “unsafe” and disabled by default. They can be enabled by setting the vfs.aio.enable_unsafe sysctl node to a non-zero value.
    So nothing is safe.
  • ssrc
    I remember reading those books (ok, it was the 4.3 BSD edition instead of 4.4) alongside Bach's "The Design of the Unix Operating System" and Uresh Vahalia's "UNIX internals: the new frontiers" (1996). I recommend "UNIX internals". It's very good and not as well known as the others.
  • ThinkBeat
    Architecturally, Windows NT was a much better designed system than Linux when it came out.
    I wish it had branched into two platforms: one as a workstation OS (NTNext), and one that gets dumber and worse in order to make it work well for gaming (Windows 7/8/10/11 etc.). Technically one would hope that NTNext would be Windows Server, but sadly no.
    I remember installing Windows NT on my PC, in awe of how much better it was than DOS/Windows 3 and later 95. And compatibility back then was not great. There was a lot that didn't work, and I was more than fine with that.
    It could run Win32, OS/2, and POSIX, and it could be extended to run other systems in the same way. POSIX was added as a necessity to bid for huge software contracts from the US government; MS lost a huge contract and lost interest in the POSIX subsystem, and in the OS/2 subsystem. They did away with them, until they re-invented a worse system for WSL.
    Note that "subsystem" in Windows NT means something very different from "subsystem" in Windows Subsystem for Linux.
  • bawolff
    > Internationalization: Microsoft, being the large company that was already shipping Windows 3.x across the world, understood that localization was important and made NT support such feature from the very beginning. Contrast this to Unix where UTF support didn't start to show up until the late 1990s
    I feel like this is a point for Unix. Unix being late to the Unicode party means UTF-8 was adopted, whereas Windows was saddled with UTF-16.
    ---
    The NT kernel does seem to have some elegance. It's too bad it is not open source; Windows with a different userspace and desktop environment would be interesting.
  • ExoticPearTree
    I have been using Windows and Linux for 20+ years. I still remember the Linux kernel 2.0 days and Windows NT 4. And I have to admit that I am more familiar with the Linux kernel than the Windows one.
    Now, that being said, I think the Windows kernel sounded better on paper, but in reality Windows was never as stable as Linux (even in its early days) doing everyday tasks (file sharing, web/mail/DNS server, etc.).
    Even to this day, the Unix philosophy of doing one thing and doing it well stands: maybe the Linux kernel wasn't as fancy as the Windows one, but it did what it was supposed to do very well.
  • rbanffy
    I don’t think the registry is a good idea. I don’t mind every program having its own dialect of a configuration language, all under the /etc tree. If you think about it, the /etc tree is just a hierarchical key-value store implemented as files where the persistent format of the leaf node is left to its implementer to decide.
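    To make the analogy concrete: both are lookups by hierarchical key, the difference being whether the leaf is a typed value or a free-form file. A sketch of the registry side, assuming a hypothetical HKLM\SOFTWARE\Example\LogLevel value and the documented RegGetValueW call:

        #include <windows.h>   /* link with advapi32.lib */

        /* Read a hypothetical REG_DWORD setting; the /etc analogue would be
           opening and parsing a small text file under /etc/example/. */
        DWORD read_log_level(void) {
            DWORD value = 0, size = sizeof(value);
            LSTATUS rc = RegGetValueW(HKEY_LOCAL_MACHINE, L"SOFTWARE\\Example",
                                      L"LogLevel", RRF_RT_REG_DWORD, NULL, &value, &size);
            return (rc == ERROR_SUCCESS) ? value : 0;  /* fall back to a default */
        }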
  • rkagerer
    Can we talk about NT picoprocesses?
    Up until this feature was added, processes in NT were quite heavyweight: new processes would get a bunch of the NT runtime libraries mapped into their address space at startup time. In a picoprocess, the process has minimal ties to the Windows architecture, and this is used to implement Linux-compatible processes in WSL 1. They sound like an extremely useful construct.
    Also, WSL 2 always felt like a "cheat" to me... is anyone else disappointed they went full-on VM and abandoned the original approach? Did small-file performance ever get adequately addressed?
  • jart
    > Unified event handling: All object types have a signaled state, whose semantics are specific to each object type. For example, a process object enters the signaled state when the process exits, and a file handle object enters the signaled state when an I/O request completes. This makes it trivial to write event-driven code (ehem, async code) in userspace, as a single wait-style system call can await for a group of objects to change their state—no matter what type they are. Try to wait for I/O and process completion on a Unix system at once; it's painful.
    Hahaha. Try polling stdin, a pipe, an ipv4 socket, and an ipv6 socket at the same time.
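    On the NT side, the "single wait-style system call" the article mentions is WaitForMultipleObjects. A minimal sketch, assuming you already hold a process handle and an event handle for overlapped I/O (both hypothetical here):

        #include <windows.h>

        /* Wait for either a child process to exit or an I/O event to signal,
           in one call; both are just kernel objects with a signaled state. */
        void wait_process_or_io(HANDLE process, HANDLE io_event) {
            HANDLE handles[2] = { process, io_event };
            DWORD which = WaitForMultipleObjects(2, handles, FALSE, INFINITE);
            if (which == WAIT_OBJECT_0) {
                /* the process exited */
            } else if (which == WAIT_OBJECT_0 + 1) {
                /* the I/O completed */
            }
        }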
  • PaulHoule
    (1) There has been some convergence, such as FUSE in Linux, which lets you implement file systems in user space, and Proton, which emulates NT very well.
    (2) Win NT's approach to file systems makes many file system operations very slow, which makes npm and other dev tools designed for Unix terribly slow on NT. Which is why Microsoft gave up on the otherwise excellent WSL 1. If you were writing this kind of thing natively for Windows you would stuff blobs into SQLite (e.g. a true "user-space filesystem") or ZIP files or some other container instead of stuffing 100,000 files into directories.
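    A sketch of the "blobs in a container instead of 100,000 files" idea, assuming the public sqlite3 C API and a hypothetical files(name, body) table:

        #include <sqlite3.h>

        /* Store one small "file" as a row instead of a real file, so that
           thousands of writes don't each pay a per-file open/close round trip. */
        int put_blob(sqlite3 *db, const char *name, const void *data, int len) {
            sqlite3_stmt *st;
            int rc = sqlite3_prepare_v2(db,
                "INSERT OR REPLACE INTO files(name, body) VALUES(?1, ?2)", -1, &st, NULL);
            if (rc != SQLITE_OK) return rc;
            sqlite3_bind_text(st, 1, name, -1, SQLITE_TRANSIENT);
            sqlite3_bind_blob(st, 2, data, len, SQLITE_TRANSIENT);
            rc = sqlite3_step(st);            /* SQLITE_DONE on success */
            sqlite3_finalize(st);
            return rc == SQLITE_DONE ? SQLITE_OK : rc;
        }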
  • walki
    I feel like the NT kernel is in maintenance-only mode and will eventually be replaced by the Linux kernel. I submitted a Windows kernel bug to Microsoft a few years ago, and even though they acknowledged the bug, the issue was closed as a "won't fix" because fixing it would require making backwards-incompatible changes.
    Windows currently has a significant scaling issue because of its Processor Groups design; it is actually more of an ugly hack that was added in Windows 7 to support more than 64 logical processors. Everyone makes bad decisions when developing a kernel; the difference between the Windows NT kernel and the Linux kernel is that fundamental design flaws tend to eventually get fixed in the Linux kernel, while they rarely get fixed in the Windows NT kernel.
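    For what it's worth, the group awareness leaks into user space: any program that wants to see more than 64 logical processors has to enumerate groups explicitly. A small sketch using the documented Win32 calls:

        #include <windows.h>
        #include <stdio.h>

        /* Count logical processors across all processor groups; a plain
           GetSystemInfo() only reports the calling thread's current group. */
        int main(void) {
            WORD groups = GetActiveProcessorGroupCount();
            DWORD total = 0;
            for (WORD g = 0; g < groups; g++)
                total += GetActiveProcessorCount(g);
            printf("%u group(s), %lu logical processors\n",
                   (unsigned)groups, (unsigned long)total);
            return 0;
        }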
  • userbinator
    Controversial opinion: the DOS-based Windows/386 family (including 3.11 enhanced mode, 95, 98, up to the ill-fated ME) is even more advanced than NT. While Unix and NT, despite how different they are in the details, are still "traditional" OSes, the systems in the lineage that started with Windows/386 are hypervisors that run VMs under hardware-assisted virtualisation. IMHO not enough has been written about the details of this architecture, compared to Unix and NT. It's a hypervisor that passes through most accesses to the hardware by default, which gave it a bad reputation for stability and security, but also a great reputation for performance and efficiency.
  • delta_p_delta_x
    NT is why I like Windows so much and can't stand Unix-likes. It is object-oriented from top to bottom, and I'm glad that in the 21st century PowerShell has continued that legacy.
    But as someone who's used all versions of Windows since 95, this paragraph strikes me the most:
    > What I find disappointing is that, even though NT has all these solid design principles in place… bloat in the UI doesn't let the design shine through. The sluggishness of the OS even on super-powerful machines is painful to witness and might even lead to the demise of this OS.
    I couldn't agree more. Windows 11 is irritatingly macOS-like and for some reason has animations that make it appear slow as molasses. What I really want is a Windows 2000-esque UI with dense, desktop-focused UIs (for an example, look at Visual Studio 2022, which is the last bastion of the late-1990s/early-2000s-style fan-out toolbar design that still persists in Microsoft's products).
    I want modern technologies from Windows 10 and 11 like UTF-8, SSD management, ClearType and high-quality typefaces, proper HiDPI scaling (something that took desktop Linux until this year to properly handle, and something that macOS doesn't actually do correctly despite appearing to do so), Windows 11's window management, and a deeper integration of .NET with Windows.
    I'd like Microsoft to backport all that to the combined UI of Windows 2000 and Windows 7 (so probably Windows 7 with the 'Classic' theme). I couldn't care less about transparent menu bars. I don't want iOS-style 'switches'. I want clear tabs, radio buttons, checkboxes, and a slate-grey design that is so straightforward that it could be software-rasterised at 4K resolution, 144 frames per second, without hiccups. I want the Windows 7-style control panel back.
  • desdenova
    Unfortunately NT doesn't have any usable distribution, so it's still a theoretical OS design.
  • AdeptusAquinas
    Could someone explain the 'sluggish UI responsiveness' talked about in the conclusion? I've never experienced it in 11, 10, 8 or 7 etc -- but maybe that's because my Windows machines are always gaming machines and have a contemporary powerful graphics card. I've used a Mac Pro for work a couple of times and never noticed it being snappier than my home machine.
  • IgorPartola
    What I don’t see in the comments here is the argument I remember hearing in the late 90s and early 2000s: that Unix is simpler than Windows. I certainly feel like it was easier for me to grasp the POSIX API compared to what Windows was doing at the time.
  • Circlecrypto2
    A great article that taught me a lot of history. As a long time Linux user and advocate for that history, I learned there is actually a lot to appreciate from the work that went into NT.
  • MaxGripe
    The sluggishness of the system on new hardware is an accurate observation, but I think the author should also take a look at macOS or popular Linux distros, where it's significantly worse
  • stevekemp
    I wonder how many of the features which other operating systems got much later, such as the unified buffer cache, were due to worries of software patents?
  • lukeh
    No one was using X.500 for user accounts on Solaris, until LDAP and RFC 2307 came along. And at that point hardly anyone was using X.500. A bit more research would have mentioned NIS.
  • parl_match
    > Unix's history is long—much longer than NT's
    Fun fact: NT is a spiritual (and in some cases, literal) successor of VMS, which itself is a direct descendant of the RSX family of operating systems, which are themselves descendants of a process control family of task runners from 1960. Unix goes back to 1964 - Multics.
    Although yeah, Unix definitely has a much longer unbroken chain.
  • virgulino
    Inside Windows NT, the 1st edition, by Helen Custer, is one of my all-time favorite books! A forgotten gem. It's good to see it being praised.
  • jhallenworld
    NT has its object manager.. the problem with it is visibility. Yes, object manager type functionality was bolted-on to UNIX, but at least it's all visible in the filesystem. In NT, you need a special utility WinObj to browse it.
  • p0seidon
    This is such a well-written read, just so insightful and broad in its knowledge. I learned a lot, thank you (loved NT at that time - now I know why).
  • chriscappuccio
    Windows was great at running word processors. BSD and Linux were great as internet-scale servers. It wasn't until Microsoft tried running Hotmail on NT that they had any idea there was a problem. Microsoft used this experience to fix problems that ultimately made Windows better for all users across many use cases.
    All the talk here about how Windows had a better architecture in the beginning conveniently avoids the fact that Windows was well known for being over-designed while delivering much less than its counterparts in the networking arena for a long time.
    It's not wrong to admire what Windows got right, but Unix got so much right by putting attention where it was needed.
  • jonathanyc
    Great article, especially loved the focus on history! I've subscribed.
    > Lastly, as much as we like to bash Windows for security problems, NT started with an advanced security design for early Internet standards given that the system works, basically, as a capability-based system.
    I'm curious as to why the NT kernel's security guarantees don't seem to result in Windows itself being more secure. I've heard lots of opinions but none from a comparative perspective looking at the NT vs. UNIX kernels.
  • davidczech
    Neo
  • chasil
    I would imagine that Windows was closer to UNIX than VMS was, and there were several POSIX ports to VMS.
    VMS POSIX ports: https://en.m.wikipedia.org/wiki/OpenVMS#POSIX_compatibility
    VMS influence on Windows: https://en.m.wikipedia.org/wiki/Windows_NT#Development "Microsoft hired a group of developers from Digital Equipment Corporation led by Dave Cutler to build Windows NT, and many elements of the design reflect earlier DEC experience with Cutler's VMS, VAXELN and RSX-11, but also an unreleased object-based operating system developed by Cutler at Digital codenamed MICA."
    Windows POSIX layer: https://en.m.wikipedia.org/wiki/Microsoft_POSIX_subsystem
    Xenix: https://en.m.wikipedia.org/wiki/Xenix "Tandy more than doubled the Xenix installed base when it made TRS-Xenix the default operating system for its TRS-80 Model 16 68000-based computer in early 1983, and was the largest Unix vendor in 1984."
    EDIT: AT&T first had an SMP-capable UNIX in 1977. "Any configuration supplied by Sperry, including multiprocessor ones, can run the UNIX system." https://www.bell-labs.com/usr/dmr/www/otherports/newp.pdf
    UNIX did not originally use an MMU: "Back around 1970-71, Unix on the PDP-11/20 ran on hardware that not only did not support virtual memory, but didn't support any kind of hardware memory mapping or protection, for example against writing over the kernel. This was a pain, because we were using the machine for multiple users. When anyone was working on a program, it was considered a courtesy to yell "A.OUT?" before trying it, to warn others to save whatever they were editing." https://www.bell-labs.com/usr/dmr/www/odd.html
    Shared memory was "bolted on" with Columbus UNIX: https://en.m.wikipedia.org/wiki/CB_UNIX
    ...POSIX implements setfacl.
  • runjake
    The NT kernel is pretty nifty, albeit an aging design.
    My issue with Windows as an OS is that there's so much cruft, often adopted from Microsoft's older OSes, stacked on top of the NT kernel, effectively circumventing its design. You frequently see examples of this in vulnerability write-ups: "NT has mechanisms in place to secure $thing, but unfortunately, this upper-level component effectively bypasses those protections".
    I know Microsoft would like to, if they considered it "possible", but they really need to move away from the Win32 and MS-DOS paradigms and rethink a more native OS design based solely on NT and evolving principles.
  • fargle
    this is a lovely and well written article, but i have to quibble with the conclusion. i agree that "it's not clear to me that NT is truly more advanced". i also agree with the statement "It is true that NT had more solid design principles at the onset and more features than its contemporary operating systems".
    but what i don't agree with is that it was ever more advanced or "better" (in some hypothetical single-dimensional metric). the problem is that all that high-minded architectural art gets in the way of practical things:
    - performance, project: (m$ shipping product, maintenance, adding features, designs, agility, fixing bugs)
    - performance, execution: (anyone's code running fast)
    - performance, market: (users adopting it, building new unpredictable things)
    it's like minix vs. linux again. sure minix was at the time in all theoretical ways superior to the massive hack of linux. except that, of course, in practice theory is not the same as practice.
    in the mid 2000-2010s my workplace had a source license for the entire Windows codebase (view only). when the API docs and the KB articles didn't explain it, we could dive deeper. i have to say i was blown away and very surprised by "NT" - given its abysmal reliability i was expecting MS-DOS/Win 3.x level hackery everywhere. instead i got a good idea of Dave Cutler and VMS - it was positively solid, pedestrian, uniform and explicit. to a highly disgusting degree: 20-30 lines of code to call a function to create something that would be 1-2 lines of code in a UNIX (sure we cheat and overload the return with error codes and status and the successful object id being returned - i mean they shouldn't overlap, right? probably? yolo!).
    in NT you create a structure containing the options, maybe call a helper function to default that option structure, call the actual function; if it fails because of limits, it reports how much you need, then you go back and re-allocate what you need and call it again. if you need the new API, you call someReallyLongFunctionEx, making sure to remember to set the version flag in the options struct to the correct size of the new updated option version. nobody is sure what happens if getSomeMinorObjectEx() takes a getSomeMinorObjectParamEx option structure that is the same size as the original getSomeMinorObjectParam struct, but it would probably involve calling setSomeMinorObjectParamExParamVersion() or getObjectParamStructVersionManager()->SelectVersionEx(versionSelectParameterEx). every one is slightly different, but they are all the same vibe.
    if NT was actual architecture, it would definitely be "brutalist" [1]. the core of NT is the antithesis of the New Jersey (Berkeley/BSD) [2] style.
    the problem is that all companies, both micro$oft and commercial companies trying to use it, have finite resources. the high-architect brutalist style works for VMS and NT, but only at extreme cost. the fact that it's tricky to get signals right doesn't slow most UNIX developers down, most of the time, except for when it does. and when it does, a buggy, but 80%, solution is but a wrong stackoverflow answer away. the fact that creating a single object takes a page of code, and doing anything real takes an architecture committee and a half-dozen objects that each take a page of (very boring) code, does slow everyone down, all the time.
    it's clear to me, just reading the code, that the MBAs running micro$oft eventually figured that out and decided, outside the really core kernel, not to adopt either the MIT/Stanford or the New Jersey/Berkeley style - instead they would go with "offshore low bidder" style for whatever else got bolted on since 1995. dave cutler probably now spends the rest of his life really irritated whenever his laptop keeps crashing because of this crap. it's not even good crap code. it's absolutely terrible; the contrast is striking.
    then another lesson (pay attention, systemd people): buggy, over-complicated user-mode stuff and ancillary services like control panel, gui, update system, etc. can sink even the best, most reliable kernel.
    then you get to sockets, and realize that the internet was a "BIG DEAL" in the 1990s. ooof, microsoft. winsock.
    then you have the other, other, really giant failure: openness. being open enough to share the actual code with the users is #1. #2 is letting them show the way and contribute. the micro$oft way was violent hatred of both ideas. oh, well. you could still be a commercial company that owns the copyright and not hide the, good or bad, code from your developers. MBAAs (MBA Assholes) strike again.
    [1] https://en.wikipedia.org/wiki/Brutalist_architecture
    [2] https://en.wikipedia.org/wiki/Worse_is_better
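    to make the verbosity point above concrete: even the plain, non-Ex Win32 call for opening a file spells out every policy decision as a parameter, and the params-struct-and-Ex pattern described above piles more on top of that. a small sketch (the path is just an example):

        #include <windows.h>

        /* Win32: sharing mode, security, creation disposition, attributes and
           overlapped-ness are all explicit parameters. the Unix equivalent is
           roughly: int fd = open("/tmp/log.txt", O_RDONLY); */
        void open_flavor(void) {
            HANDLE h = CreateFileW(L"C:\\temp\\log.txt", GENERIC_READ,
                                   FILE_SHARE_READ, NULL, OPEN_EXISTING,
                                   FILE_ATTRIBUTE_NORMAL, NULL);
            if (h != INVALID_HANDLE_VALUE)
                CloseHandle(h);
        }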
  • teleforce
    > As a result, NT did support both Internet protocols and the traditional LAN protocols used in pre-existing Microsoft environments, which put it ahead of Unix in corporate environments.
    Fun fact: to accelerate the networking and Internet capability of Windows NT, given the complexity of coding a compatible TCP/IP implementation, the developers just took the TCP/IP stack from BSD and called it a day, since the BSD license allows for that [1].
    They could not have done that with Linux, since it's GPL-licensed code, and this is probably when Microsoft's FUD attack on Linux started, the culmination of which was the "Linux is a cancer" statement by ex-CEO Ballmer.
    The irony is that Linux is now the most used OS on the Azure cloud platform (where Cutler was the chief designer), not the BSDs [2].
    [1] History of the Berkeley Software Distribution: https://en.wikipedia.org/wiki/History_of_the_Berkeley_Softwa...
    [2] Microsoft: Linux Is the Top Operating System on Azure Today: https://news.ycombinator.com/item?id=41037380