Exploring the Future of Computing

Notice

Just want to let y’all know that my family and I have been hit hard with bronchitis these past two weeks, and especially my recovery is going quite slowly (our kids are healthy again, and my wife is recovering quite well!). As such, I haven’t been able to do much OSNews work.

I hope things will finally clear up a bit over the weekend so that I can resume normal service come Monday. Enjoy your weekend, y’all!

Eliminating memory safety vulnerabilities at the source

The push towards memory safe programming languages is strong, and for good reason. However, especially for bigger projects with a lot of code that potentially needs to be rewritten or replaced, you might question whether all the effort is even worth it, particularly if all the main contributors would also need to be retrained. Well, it turns out that merely focusing on writing new code in a memory safe language will drastically reduce the number of memory safety issues in a project as a whole.

Memory safety vulnerabilities remain a pervasive threat to software security. At Google, we believe the path to eliminating this class of vulnerabilities at scale and building high-assurance software lies in Safe Coding, a secure-by-design approach that prioritizes transitioning to memory-safe languages.

This post demonstrates why focusing on Safe Coding for new code quickly and counterintuitively reduces the overall security risk of a codebase, finally breaking through the stubbornly high plateau of memory safety vulnerabilities and starting an exponential decline, all while being scalable and cost-effective.

↫ Jeff Vander Stoep and Alex Rebert at the Google Security Blog

In this blog post, Google highlights that even if you only write new code in a memory-safe language, while only applying bug fixes to old code, the number of memory safety issues will decrease rapidly, even when the total amount of code written in unsafe languages increases. This is because vulnerabilities decay exponentially – in other words, the older the code, the fewer vulnerabilities it’ll have.
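That exponential decay is easy to see with a back-of-the-envelope simulation. The sketch below is my own illustration of the idea, not the model from Google’s post: every number in it – yearly code growth, initial defect density, the decay factor, the year the policy switches to memory-safe-only for new code – is an invented assumption chosen purely to show the shape of the curve.

```cpp
// Minimal sketch of the "vulnerabilities decay exponentially" argument.
// Every constant below is an invented, illustrative assumption – not a
// figure from the Google post – chosen only to show the shape of the curve.
#include <cstdio>
#include <vector>

int main() {
    const double kNewKlocPerYear   = 1000.0; // unsafe code written per year before the switch (kLOC)
    const double kMaintKlocPerYear = 100.0;  // unsafe code still added after the switch (bug fixes etc.)
    const double kInitialDensity   = 1.0;    // memory safety vulns per kLOC in brand-new unsafe code
    const double kAnnualDecay      = 0.5;    // each year of age halves a cohort's vulnerability density
    const int    kSwitchYear       = 5;      // from this year on, new feature code is memory safe

    std::vector<double> cohorts;             // kLOC of unsafe code, indexed by the year it was written

    for (int year = 0; year <= 10; ++year) {
        cohorts.push_back(year < kSwitchYear ? kNewKlocPerYear : kMaintKlocPerYear);

        double unsafeKloc = 0.0, expectedVulns = 0.0;
        for (size_t i = 0; i < cohorts.size(); ++i) {
            const int age = year - static_cast<int>(i);             // how old this cohort is
            double density = kInitialDensity;
            for (int a = 0; a < age; ++a) density *= kAnnualDecay;  // exponential decay with age
            unsafeKloc    += cohorts[i];
            expectedVulns += cohorts[i] * density;
        }
        std::printf("year %2d: unsafe code %6.0f kLOC, expected memory safety vulns %7.1f\n",
                    year, unsafeKloc, expectedVulns);
    }
    return 0;
}
```

Run it and the expected number of memory safety vulnerabilities roughly halves every year after the switch, even though the total amount of unsafe code keeps growing slightly – which is exactly the counterintuitive result the post describes.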

In Android, for instance, using this approach, the percentage of memory safety vulnerabilities dropped from 76% to 24% over 6 years, which is a great result and something quite tangible.

Despite the majority of code still being unsafe (but, crucially, getting progressively older), we’re seeing a large and continued decline in memory safety vulnerabilities. The results align with what we simulated above, and are even better, potentially as a result of our parallel efforts to improve the safety of our memory unsafe code. We first reported this decline in 2022, and we continue to see the total number of memory safety vulnerabilities dropping.

↫ Jeff Vander Stoep and Alex Rebert at the Google Security Blog

What this shows is that a large project, like, say, the Linux kernel, for no particular reason whatsoever, doesn’t need to replace all of its code with, say, Rust, again, for no particular reason whatsoever, to reap the benefits of a modern, memory-safe language. Even by focusing on memory-safe languages only for new code, you will still exponentially reduce the number of memory safety vulnerabilities. This is not a new discovery, as it’s something observed and confirmed many times before, and it makes intuitive sense, too; older code has had more time to mature.

What happened to the Japanese PC platforms?

The other day a friend asked me a pretty interesting question: what happened to all those companies who made those Japanese computer platforms that were never released outside Japan? I thought it’d be worth expanding that answer into a full-size post.

↫ Misty De Meo

Japan had a number of computer makers that sold platforms that looked and felt like western PCs, but were actually quite different hardware-wise, and incompatible with the IBM PC. None of these exist anymore today, and the reason is simple: Windows 95. The Japanese platforms that were compatible enough with the IBM PC to get a Windows 95 port turned into commodities with little to distinguish them from regular IBM PCs, and the odd platform that didn’t use an x86 chip at all – like the X68000 – didn’t get a Windows port and thus just died off.

The one platform mentioned in this article that I had never heard of was the FM Towns, made by Fujitsu, which had its own graphical operating system called Towns OS. The FM Towns machines and Towns OS were notable and unique at the time in that Towns OS was the first operating system to boot from CD-ROM, and it just so happens that Joe Groff published an article earlier this year detailing this boot process, including a custom bootable image he made.

Here in the west we mostly tend to remember the PC-98 and X68000 platforms for their gaming catalogs and stunning designs, but that’s like only remembering the IBM PC for its own gaming catalog. These machines weren’t just glorified game consoles – they were full-fledged desktop computers used for the same boring work stuff we used the IBM PC for, and it truly makes me sad that I can’t read a single character of Japanese, so a unique operating system like Towns OS will always remain a curiosity for me.

OpenBSD now enforcing no invalid NUL characters in shell scripts

Our favorite operating system is now changing the default shell (ksh) to enforce not allowing invalid NUL characters in input that will be parsed as parts of the script.

↫ Undeadly.org

As someone who doesn’t deal with stuff like this – I rarely actively use shell scripts – it seems kind of insane to me that this wasn’t the norm since the beginning.

Microsoft deprecates Windows Server Update Services, suggests cloud services instead

As part of our vision for simplified Windows management from the cloud, Microsoft has announced deprecation of Windows Server Update Services (WSUS). Specifically, this means that we are no longer investing in new capabilities, nor are we accepting new feature requests for WSUS. However, we are preserving current functionality and will continue to publish updates through the WSUS channel. We will also support any content already published through the WSUS channel.

↫ Nir Froimovici

What an odd feature to deprecate. Anyone with a large enough fleet of machines probably makes use of Windows Server Update Services, as it adds some much-needed centralised control to the downloading and deployment of Windows updates, so you can do localised partial rollouts for testing, which, as the CrowdStrike debacle showed us once more, is quite important. WSUS also happens to be a tool that is set up and run locally, instead of in the cloud, and that’s where we get to the real reason WSUS is being deprecated.

Microsoft is advising IT managers who use WSUS to switch to Microsoft’s alternatives, like Windows Autopatch, Microsoft Intune, and Azure Update Manager. These all happen to run in the cloud, giving up that control WSUS provided by running locally, and they’re not free either – they’re subscription services, of course. I mean, technically WSUS isn’t free either as it’s part of Windows Server, but these cloud services come on top of the cost of Windows Server itself.

Nobody escapes the relentless march of subscription costs.

Disable Sequoia’s monthly screen recording permission prompt

The widely reported “foo is requesting to bypass the system private window picker and directly access your screen and audio” prompt in Sequoia (which Apple has moved from daily to weekly to now monthly) can be disabled by quitting the app, setting the system date far into the future, opening and using the affected app to trigger the nag, clicking “Allow For One Month”, then restoring the correct date.

↫ tinyapps.org blog

Or, and this is a bit of a radical idea, you could use an operating system that doesn’t infantilise its users.

Qualcomm wants to buy Intel

On Friday afternoon, The Wall Street Journal reported Intel had been approached by fellow chip giant Qualcomm about a possible takeover. While any deal is described as “far from certain,” according to the paper’s unnamed sources, it would represent a tremendous fall for a company that had been the most valuable chip company in the world, based largely on its x86 processor technology that for years had triumphed over Qualcomm’s Arm chips outside of the phone space.

↫ Richard Lawler and Sean Hollister at The Verge

Either Qualcomm is only interested in buying certain parts of Intel’s business, or we’re dealing with someone trying to mess with stock prices for personal gain. The idea of Qualcomm acquiring Intel seems entirely outlandish to me, and that’s not even taking into account that regulators will probably have a thing or two to say about this. The one thing such a crazy deal would have going for it is that it would create a pretty strong and powerful all-American chip giant, which is a PR avenue the companies might explore if this is really serious.

One of the most valuable assets Intel has is the x86 architecture and the associated patents and licensing deals, and the immense market power that comes with those. Perhaps Qualcomm is interested in designing x86 chips, or, more likely, perhaps they’re interested in all that sweet, sweet licensing money they could extract by allowing more companies to design and sell x86 processors. The x86 market currently consists almost exclusively of Intel and AMD, a situation which may be leaving a lot of licensing money on the table.

Pondering aside, I highly doubt this is anything other than an overblown, misinterpreted story.

Slowly booting full Linux on the Intel 4004 for fun, art, and absolutely no profit

Can you run Linux on the Intel 4004, the first commercially produced microprocessor, released to the world in 1971? Well, Dmitry Grinberg, the genius engineer who got Linux to run on all kinds of incredibly underpowered hardware, sought to answer this very important question. In short, yes, you can run Linux on the 4004, but much as with other extremely limited and barebones chips, you have to get… Creative. Very creative.

Of course, Linux cannot and will not boot on a 4004 directly. There is no C compiler targeting the 4004, nor could one be created due to the limitations of the architecture. The amount of ROM and RAM that is addressable is also simply too low. So, same as before, I would have to resort to emulation. My initial goal was to fit into 4KB of code, as that is what an unmodified unassisted 4004 can address. 4KB of code is not much at all to emulate a complete system. After studying the options, it became clear that MIPS R3000 would be the winner here. Every other architecture I considered would be harder to emulate in some way. Some architectures had arbitrarily-shifted operands all the time (ARM), some have shitty addressing modes necessitating that they would be slow (RISCV), some would need more than 4KB to even decode instructions (x86), and some were just too complex to emulate in so little space (PPC). … so … MIPS again… OK!

↫ Dmitry Grinberg

This is just one very small aspect of this massive undertaking, and the article and videos accompanying his success are incredibly detailed and definitely not for the faint of heart. The amount of skill, knowledge, creativity, and persistence on display here is stunning, and many of us can only dream of being able to do stuff like this. I absolutely love it.

Of course, the Linux kernel had to be slimmed down considerably, as a lot of the stuff currently in the kernel is of absolutely no use on such an old system. Boot time is still measured in days, but it helped a lot. Grinberg also turned the whole setup into what is effectively an art piece you can hang on the wall, where you can have it run and, well, do things – not much, of course, but he did include a small program that draws the Mandelbrot set on the VFD and serial port, which is a neat trick.

He plans on offering the whole thing as a kit, but a lot of it depends on getting enough of the old chips to offer a complete, ready-to-assemble kit in the first place.

Why Apple uses JPEG XL in the iPhone 16 and what it means for your photos

The iPhone 16 family has arrived and includes many new features, some of which Apple has played very close to its vest. One such improvement is the inclusion of JPEG XL file types, which promise improved image quality compared to standard JPEG files while delivering relatively smaller file sizes.

[…]

Overall, JPEG XL addresses many of JPEG’s shortcomings. The 30-year-old format is not very efficient, only offers eight-bit color depth, doesn’t support HDR, doesn’t do alpha transparency, doesn’t support animations, doesn’t support multiple layers, includes compression artifacts, and exhibits banding and visual noise. JPEG XL tackles these issues, and unlike WebP and AVIF formats, which each have some noteworthy benefits too, JPEG XL has been built from the ground up with still images in mind.

↫ Jeremy Gray at PetaPixel

Excellent news, and it will hopefully mean others will follow – something that tends to happen when Apple finally supports the new thing.

Nintendo and The Pokémon Company file patent lawsuit against maker of hit game Palworld

Nintendo, together with The Pokémon Company, filed a patent infringement lawsuit in the Tokyo District Court against Pocketpair, Inc. on September 18, 2024.

This lawsuit seeks an injunction against infringement and compensation for damages on the grounds that Palworld, a game developed and released by the Defendant, infringes multiple patent rights.

↫ Nintendo press release

Since the release of Palworld, which bears a striking resemblance to the Pokémon franchise, everybody’s been kind of expecting a reaction from both Nintendo and The Pokémon Company, and here it is. What’s odd is that it’s not a trademark, trade dress, or copyright lawsuit, but a patent one, which is not what you’d expect when looking at how similar the Palworld creatures look to Pokémon, to the point where some people even suggest the 3D models were simply lifted wholesale from the latest Nintendo Switch Pokémon games.

There’s no mention of which patents Pocketpair supposedly infringes upon, and in a statement, the company claims it, too, has no idea which patents are supposedly in play. I have to admit I never even stopped to think that game patents were a thing at all, but now that I’ve spent more than two seconds pondering the concept, of course they exist.

This lawsuit will be quite interesting to follow, because the games industry is one of the few technology sectors out there where copying each other’s ideas, concepts, mechanics, and styles is not only normal, it’s entirely expected and encouraged. New ideas spread through the games industry like wildfire, and if some new mechanic is a hit with players, it’ll be integrated into other games within a few months, and games coming out a year later are expected to have the hit new mechanics from last year.

It’s a great example of how beneficial it is to have ideas freely spread, and how awesome it is to see great games take existing mechanics and apply interesting twists, or use them in entirely different genres than where they originated from. Demon’s Souls and the Dark Souls series are a great example of a series of games that not only established a whole new genre other games quickly capitalised on, but also introduced the gaming world to a whole slew of new and unique mechanics that are now being applied in all kinds of new and interesting ways.

Lawsuits like this one definitely pose a threat to this, so I hope that either this fails spectacularly in court, or that the patents in question are so weirdly specific as to be utterly without merit in going after any other game.

DirectX adopting SPIR-V as the interchange format of the future

As we look to the future, maintaining a proprietary IR format (even one based on an open-source project) is counter to our commitments to open technologies, so Shader Model 7.0 will adopt SPIR-V as its interchange format. Over the next few years, we will be working to define a SPIR-V environment for Direct3D, and a set of SPIR-V extensions to support all of Direct3D’s current and future shader programming features through SPIR-V. This will allow developers to take better advantage of existing tools and unify the ecosystem around investing in one IR.

↫ Chris Bieneman and Cassie Hoef at the DirectX Developer Blog

SPIR-V is developed by the Khronos Group and is an “intermediate language for parallel computing and graphics by Khronos Group”. I don’t know what any of this means, but any adoption of Khronos technologies is a good thing, especially by a heavyweight like Microsoft.

European Commission to order Apple to take interoperability measures after company refuses to comply with DMA

The European Commission has taken the next step in forcing Apple to comply with the Digital Markets Act. The EC has started two so-called specification proceedings, in which it can more or less dictate exactly what Apple needs to do to comply with the DMA – in this case covering the interoperability obligation set out in Article 6(7) of the DMA. The two proceedings entail the following:

The first proceeding focuses on several iOS connectivity features and functionalities, predominantly used for and by connected devices. Connected devices are a varied, large and commercially important group of products, including smartwatches, headphones and virtual reality headsets. Companies offering these products depend on effective interoperability with smartphones and their operating systems, such as iOS. The Commission intends to specify how Apple will provide effective interoperability with functionalities such as notifications, device pairing and connectivity.

The second proceeding focuses on the process Apple has set up to address interoperability requests submitted by developers and third parties for iOS and iPadOS. It is crucial that the request process is transparent, timely, and fair so that all developers have an effective and predictable path to interoperability and are enabled to innovate.

↫ European Commission press release

It seems the European Commission is running out of patience, and in lieu of waiting on Apple to comply with the DMA on its own, is going to tell Apple exactly what it must do to comply with the interoperability obligation. This means that, once again, Apple’s childish, whiny approach to DMA compliance is backfiring spectacularly, with the company no longer having the opportunity to influence and control its own interoperability measures – the EC is simply going to tell them what they must do.

The EC will complete these proceedings within six months, and will provide Apple with its preliminary findings which will explain what is expected of Apple. These findings will also be made public to invite comments from third parties. The proceedings are unrelated to any fines for non-compliance, which are separate.

GNOME 47 released with accent colours and completely new open/save file dialogs

The GNOME project has released their newest major version, GNOME 47, and while it’s not the most groundbreaking release, there’s still a ton of good stuff in here. Two features really stand out, with the first one being the addition of accent colours. Instead of being locked into the default GNOME blue accent colour, you can now choose between a variety of colours, which is a very welcome addition. I use the accent colour feature on all my computers, and since I run KDE, I also have this nifty KDE feature where it’ll select an accent colour automatically based on your wallpaper.

No, this isn’t a groundbreaking feature, but considering GNOME’s tendency towards not allowing any customisation, this is simply very welcome.

A much more substantial feature comes in the form of brand new open/save file dialogs, and I’m sure even the GNOME developers themselves are collectively sighing in relief about this one. GNOME’s open/save dialogs were so bad they became a meme, and now they’re finally well and truly fixed, thanks to effectively removing the old ones and adding new ones based on the GNOME Files file manager.

GNOME 47 comes with brand new file open and save file dialogs. The new dialogs are a major upgrade compared with the previous versions, and are based on the existing Files app rather than being a separate codebase. This results in the new dialogs having a much more complete set of features compared with the old open and save dialogs. With the new dialogs you can zoom the view, change the sort order in the icon view, rename files and folders, preview files, and more.

↫ GNOME 47 release notes

And yes, this includes thumbnails.

There’s tons more in GNOME 47, like a new design for dialog windows that look and feel more like they belong on a mobile UI, tons of improvements to Files, the Settings application, GNOME Online Accounts, Web, and more. GNOME 47 will make its way to your distribution of choice soon enough, but of course, you can always build and install it yourself if you’re so inclined.

Intel to spin off its chipmaking business

Intel’s woes are far from over. Pat Gelsinger, the company’s CEO, has announced that Intel’s chipmaking business will be spun off and turned into a separate company.

A subsidiary structure will unlock important benefits. It provides our external foundry customers and suppliers with clearer separation and independence from the rest of Intel. Importantly, it also gives us future flexibility to evaluate independent sources of funding and optimize the capital structure of each business to maximize growth and shareholder value creation.

There is no change to our Intel Foundry leadership team, which continues to report to me. We will also establish an operating board that includes independent directors to govern the subsidiary. This supports our continued focus on driving greater transparency, optimization and accountability across the business.

↫ Pat Gelsinger

This is a big move, and it illustrates the difficulties Intel is facing. Its foundry business lost $7 billion last year, and the company is cutting 15% of its workforce – 15,000 people – indicating it needs to do something to turn the ship around. Intel is also pausing construction on two additional plants in Europe, but will continue its expansion efforts in the United States. A bitter note is that Intel received a massive cash injection from the Biden administration, yet then proceeded to lay off those 15,000 people.

Socialism for the rich, capitalism for the poor.

FreeBSD 13.4 released

FreeBSD 13.4 has been released. This is already the fifth release in the FreeBSD 13 series, and it contains the usual set of security fixes, driver updates, and updates to important packages such as openssh, LLVM, clang, and so on. If you’re running FreeBSD 13, you already know how to upgrade, and if you want to start using FreeBSD 13, here’s the download page.

Things you really should know about Windows Input, but would rather not

Are you developing a game for Windows, and are you working on input handling?

At first, it could reasonably be assumed that mouse and keyboard should be the simplest parts of this to deal with, but in reality, they are not – at least if we are talking about Windows. In fact, several extremely popular AAA games ship with severe mouse input issues when specific high-end mice are used, and some popular engines have issues that are still extant.

In this article we’ll explore a few reasons why that is the case, and end up with a solution that works but is still unsatisfactory. I assume that there is a whole other level of complexity involved in properly dealing with accessories like steering wheels, flight sticks, and so on in simulators, but so far I never had the pleasure of working on a game that required this, and this article will not cover those types of input devices.

↫ Peter ‘Durante’ Thoman

So, what is the problem? Basically, there are two ways to handle mouse input in Windows: if you use batched raw input processing, which is pretty much a requirement, you need to also choose whether or not to keep legacy input enabled. If you keep it enabled, the legacy input will add so much junk to your message queue it can negatively impact the performance of your game quite harshly. If you disable it, however, something really fun happens: you can no longer move the game window… Because the Windows UI uses legacy input.

Thoman has a solution that he and his company use, and he considers it an ugly hack, but they just don’t know of a better way to solve this issue. Thoman keeps legacy input enabled, but limits the number of message queue events per frame that are being processed (they limit it to 5). As far as they can tell, this doesn’t seem to have any negative side effects, but it’s clearly a bit of an ugly hack that shouldn’t be necessary.
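For the curious, here’s a rough sketch in C++ of what that compromise might look like. The cap of five messages per frame is the number mentioned above; everything else – the function names, the window handle, the overall scaffolding – is my own illustrative guesswork, not code from Thoman’s article.

```cpp
// Sketch of the compromise described above: keep legacy input enabled (so the
// window can still be moved and the OS UI keeps working), but only drain a
// handful of queued messages per frame so a flood of legacy mouse messages
// cannot eat the whole frame budget.
#include <windows.h>

// Register for raw mouse input WITHOUT RIDEV_NOLEGACY, so WM_MOUSEMOVE and
// friends keep arriving alongside WM_INPUT.
bool RegisterMouseRawInput(HWND hwnd) {
    RAWINPUTDEVICE rid = {};
    rid.usUsagePage = 0x01;   // HID generic desktop controls
    rid.usUsage     = 0x02;   // mouse
    rid.dwFlags     = 0;      // deliberately NOT RIDEV_NOLEGACY
    rid.hwndTarget  = hwnd;
    return RegisterRawInputDevices(&rid, 1, sizeof(rid)) == TRUE;
}

// Called once per frame from the game loop. Returns false when the app
// should shut down (WM_QUIT was seen).
bool PumpMessages(int maxMessages = 5) {
    MSG msg;
    for (int i = 0; i < maxMessages && PeekMessage(&msg, nullptr, 0, 0, PM_REMOVE); ++i) {
        if (msg.message == WM_QUIT)
            return false;            // tell the game loop to exit
        TranslateMessage(&msg);
        DispatchMessage(&msg);       // WM_INPUT reaches the window procedure as usual
    }
    return true;
}
```

Setting RIDEV_NOLEGACY instead would give you a much quieter message queue, but, as the article explains, you then lose the legacy messages Windows’ own window management relies on – which is exactly the trade-off that makes the per-frame cap necessary in the first place.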

I found this a rather interesting niche topic, and I wonder how many people have struggled with this before, and what kind of other solutions exist.

A brief history of QuickTime

We all know about the Desktop Publishing revolution that the first Macs and their PostScript LaserWriter printers brought in the late 1980s, but many have now forgotten the Desktop Video revolution that followed in the next decade. At its heart was support for multimedia in Apple’s QuickTime.

QuickTime isn’t a single piece of software, or even an API in Classic Mac OS, but a whole architecture to support almost any media format you could conceive of. It defines container and file formats for multiple media types, forming the basis for the MPEG-4 standard, extensible encoding and decoding of a wide variety of media using Codecs, and more.

↫ Howard Oakley

As a Windows user before I switched to the Mac somewhere in 2003 or 2004, I mostly associated QuickTime with an annoying piece of crapware I sometimes had to install to watch videos, despite my Windows installation being perfectly capable of playing a whole slew of video codecs just fine. To make matters worse, Apple eventually started forcing Windows users to also install their auto-update tool that ran in the background, which would occasionally just… Install stuff without your permission.

Of course, QuickTime was a whole lot more than that, especially on the Mac, where it was simply a core technology of the Mac operating system and the name of the built-in video player. It also served as the underpinnings for a whole slew of related technologies, from movie editors like iMovie to the QuickTime streaming tools included in Mac OS X Server, so odds are that somehow, somewhere, you’ve used QuickTime in your lifetime.

I’m not entirely ashamed to admit I had to check whether QuickTime is still part of macOS today – I haven’t actively used macOS since, I think, the Snow Leopard days in 2009 – but it was obviously sunset quite a while ago in favour of AVFoundation, which macOS still uses today.

Releasing Windows as open source is the only viable way forward for Microsoft, and it’s going to happen

Last week, Julio Merino published an article I wish someone had written ages ago: a fair, unbiased look at the differences between Windows NT in its original form and UNIX roughly at the time of the initial releases of Windows NT. Merino, who has a long career in tech and has made contributions to several operating systems, does a great job cutting through the fanboyism and decades’ worth of conventional wisdom, arriving at the following conclusion that I think many of us here will share even without diving into the great depth of his article.

NT was groundbreaking technology when it launched. As I presented above, many of the features we take for granted today in systems design were present in NT since its inception, whereas almost all other Unix systems had to gain those features slowly over time. As a result, such features don’t always integrate seamlessly with Unix philosophies.

Today, however, it’s not clear to me that NT is truly “more advanced” than, say, Linux or FreeBSD. It is true that NT had more solid design principles at the onset and more features than its contemporary operating systems, but nowadays… the differences are blurry. Yes, NT is advanced, but not significantly more so than modern Unixes.

What I find disappointing is that, even though NT has all these solid design principles in place… bloat in the UI doesn’t let the design shine through. The sluggishness of the OS even on super-powerful machines is painful to witness and might even lead to the demise of this OS.

↫ Julio Merino

You should definitely read the whole thing, and not just the conclusion, as it will give you some great insight into some of the differences between the two approaches, and how the UNIX and Windows NT worlds learned from each other and grew together. It’s well-written, easy to read, and contains a ton of information and details about especially Windows NT most people are probably not aware of.

Reading through the article helped me crystallise a set of thoughts I’ve been having about the future of Windows, and in particular, the future of Windows NT as a shorthand for the kernel, lower-level frameworks, and everything else below the graphical layer. I think there’s a major change coming to Windows NT, something so big and unheard of that it’s going to be the most defining moment in Windows NT history since its very first release. A few facts lie at the root of my conclusion.

First, ever since the very beginning, Windows NT has been developed in roughly the same way: behind closed doors by a group of specialists inside Microsoft, and every now and then we got a massive dump of new code in the form of a major Windows release. It’s only recently that Microsoft has started taking a more rolling release approach to Windows development, with smaller updates peppered throughout the year, with different release branches users can subscribe to.

Second, despite many of us almost equating Microsoft with Windows – or perhaps with Windows and Office – the reality of it is that Windows hasn’t been the primary driver of revenue for Microsoft for a while now. In Microsoft’s fiscal year 2023, Windows made up just 10% of the company’s total revenue, amounting to $22 billion out of a total of $211 billion. Azure alone is almost four times as large at $80 billion, and even LinkedIn – yes, LinkedIn – is good for $15 billion in revenue, making Windows’ revenue less than 50% higher than that of the most soulless social network in human history.

Third, despite Windows’ decreasing revenue share, the operating system is becoming ever larger in scope. Not only does it need to cover the literally infinite possible combinations of x86 hardware in both the desktop/laptop and server space, it now also needs to cover what is surely going to be a growing market for ARM hardware, starting with laptops, but surely expanding to desktops and servers, too. Microsoft needs to foot the bill for all of this development, and for how much longer can the company justify spending an inordinate amount of money on a massive army of Windows developers, when the revenue they bring in is such a small part of the company, and a part that’s decreasing every year, to boot?

Fourth, the competition Windows faces is surprisingly strong. Not only are macOS, Chrome OS, and even the Linux desktop doing better than ever, mobile computing is also competing with Windows, and that’s a space Microsoft is simply not present in at all. This is especially pressing in the developing world, where often people’s first and only computing experience is mobile – through Android, mostly – and Microsoft and Windows simply don’t play any role.

Given these facts, there’s only one reasonable course of action for Microsoft.

I think the company is going to address all of these issues by releasing large parts of Windows NT as open source. I base this on a gut feeling born out of the above facts, and not on any form of insider information, and there is a 99.9% chance that I am wholly, completely, and utterly wrong. Still, deep down, I feel like releasing Windows as open source makes the most sense considering the challenges the operating system and its parent company are facing.

You and I are going to witness Windows NT’s source code being published as open source on GitHub by Microsoft within 5-7 years, accompanied by an open governance model wherein contributions are welcomed and encouraged. Even if Microsoft never takes such a step, I am convinced that when today’s employees and executives eventually write and publish their memoirs, they will contain a lot of discussion of the very serious consideration given within the company to doing exactly that.

You can quote me on this. And then laugh at me when it inevitably turns out I’m wrong.

Apple releases iOS/iPadOS 18, macOS 15, and a ton more

It’s Apple operating system release day, so if you’re in the Apple ecosystem, it’s like Christmas morning, but for your devices. The two major platforms are, of course, iOS/iPadOS 18:

‌iOS 18‌ adds new customization options for the Home Screen, with the option to arrange apps and widgets with open spaces and add new tints to app icons. Control Center has been entirely overhauled with support for multiple pages, third-party controls, and the option to put controls on the Lock Screen and activate them with the Action Button.

↫ Juli Clover at MacRumors

And macOS 15:

‌macOS Sequoia‌ features iPhone Mirroring, which allows you to control and monitor your ‌iPhone‌ right from your Mac. You can use your ‌iPhone‌’s apps and get your ‌iPhone‌’s notifications all while your ‌iPhone‌ is tucked away and locked.

Window tiling has been improved to make it easier to arrange multiple windows on your Mac’s display, and there are new keyboard and menu shortcuts for organizing your windows. In Safari, Highlights will now show you the information you want most from websites, and there’s a new Viewer mode for watching videos without distractions.

↫ Juli Clover at MacRumors

It doesn’t stop there, though, as Apple also released watchOS 11, visionOS 2, tvOS 18, and the most important and most hotly anticipated of all of Apple’s platforms, HomePod Software 18. It’s genuinely kind of staggering how Apple manages to update all of these various platforms at the same time, each coming with a ton of new features and bugfixes, and ship them out to consumers – generally without any major issues or showstoppers. Especially in the case of iOS and macOS, that’s definitely a major difference from the Windows and Android worlds, where users are confronted with strict hardware requirements, lack of update availability altogether, or just stick with previous versions because the new versions contain tons of privacy or feature regressions.

Do note that Apple’s AI/ML features announced during WWDC aren’t shipping yet, and that iPhone Mirroring is not available in the EU because someone told Tim Cook “no” and he threw a hissy fit.

Chrome on the Mac uses less battery than Safari

It’s one of the most pervasive common wisdoms shared all over the web, no matter where you go – it’s one of those things everybody seems to universally agree on: Chrome will absolutely devastate your battery life on the Mac, and you should really be using Safari, because Apple’s special integration magic pixie dust sprinkles ensures Safari sips instead of gulps electricity. Whether you read random forum posters, Apple PR spokespeople, or Apple’s own executives on stage during events, this wisdom is hard to escape.

Is it true, though?

Well, Matt Birchler decided to do something entirely revolutionary and entirely unheard of: a benchmark. Back in the olden days of yore, we would run benchmarks to test the claims from companies and their PR departments, and Birchler decided to dust off this old technique and develop a routine to put the Chrome battery claims to the test. After 3 days of continuous testing on a freshly installed 14” MacBook Pro with an M2 Pro processor and 16 GB RAM running the latest stable releases of both browsers, Birchler came to some interesting conclusions.

In my 3-hour tests, Safari consumed 18.67% of my battery each time on average, and Chrome averaged 17.33% battery drain. That works out to about 9% less battery drain from Chrome than Safari. Yes, you read that right, I found Chrome was easier on my battery than Safari.

While I did experience some variability in each 3 hour test run, Chrome came out on top in 5 of the 6 direct comparisons.

↫ Matt Birchler

His methodology seems quite sound and a good representation of what most laptop users will use their browser for: YouTube, social media, a few news websites, and editing a Google Doc, in a 20-minute loop that was repeated for three hours per test. Several of these three-hour tests were then run to counter variability. I highly doubt using different websites will radically change the results, but I am obviously curious to see a similar test run on Windows and Linux, x86 and ARM, for a more complete picture that goes beyond just the Mac.

Conventional wisdom is sometimes wrong, and I think we have a classic case of that here. While there may have been a time in the past where Chrome on the Mac devastated battery life, it seems Chrome and Chromium engineers have closed the gap, and in some cases even beat Safari. Now, this doesn’t mean everybody should rush and switch to Chrome, since there are countless other reasons to use Safari over Chrome other than supposed battery life advantages.

With Apple PR arguing that alternative browser engines should not be allowed on iOS because Chrome would devastate iOS’ battery life, tests like these are more important than ever, and I hope we’re going to see more of them. Tech media always seems to just copy/paste whatever manufacturers and corporations claim without so much as a hint of skepticism, and this benchmark highlights the dangers of doing so, in case you didn’t already know that believing corporations was a terrible idea.