Related discussion earlier this week, https://news.ycombinator.com/item?id=45158523
So, to summarize: Google embargoes security patches for four months so that OEMs can push out updates more slowly. And if those patches were immediately added to an open source project like GrapheneOS, attackers would gain info on the vulnerabilities before OEMs provide updates (the GrapheneOS project can see the patches, but they can't ship them). But a lot of patches leak anyway, so the delay ends up being pointless.
The stupidest part is that, according to the thread, OEMs are allowed to provide binary-only patches before the embargo ends, which makes the whole thing nonsensical, since it's trivial to figure out the vulnerabilities from the binaries.
Fun fact: Google actually owns the most commonly used tool, BinDiff ;)
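To make that concrete, here is a minimal sketch of the patch-diffing idea, assuming you have the old and new builds of some library on hand; it's a toy, not BinDiff, and the file names are hypothetical. It disassembles both with objdump, splits the output per function, and lists the functions whose bodies changed, which is usually where the fix (and therefore the vulnerability) lives.

```python
#!/usr/bin/env python3
"""Toy patch-diffing sketch: report which functions changed between two
builds of a binary. Illustrative only -- real diffing tools match functions
structurally instead of comparing disassembly text."""
import re
import subprocess
import sys

def functions(binary_path):
    """Map function name -> disassembly text, parsed from `objdump -d`."""
    asm = subprocess.run(["objdump", "-d", binary_path],
                         capture_output=True, text=True, check=True).stdout
    funcs, name, body = {}, None, []
    for line in asm.splitlines():
        header = re.match(r"^[0-9a-f]+ <(.+)>:$", line)
        if header:                        # start of a new function
            if name is not None:
                funcs[name] = "\n".join(body)
            name, body = header.group(1), []
        elif name is not None and "\t" in line:
            # Keep only the mnemonic/operand column so byte encodings and
            # per-line addresses don't dominate the comparison.
            body.append(line.split("\t")[-1])
    if name is not None:
        funcs[name] = "\n".join(body)
    return funcs

if __name__ == "__main__":
    old, new = functions(sys.argv[1]), functions(sys.argv[2])
    for fn in sorted(old.keys() & new.keys()):
        if old[fn] != new[fn]:
            print(f"changed: {fn}")       # likely location of a fix
    for fn in sorted(new.keys() - old.keys()):
        print(f"added:   {fn}")
```

Running it as `python3 diff_functions.py libfoo_old.so libfoo_new.so` (hypothetical names) narrows a whole patched library down to a handful of touched functions in seconds. Real tools like BinDiff or Diaphora match functions structurally rather than textually, so they tolerate compiler noise much better, but the principle is the same.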
Unless the OEMs bundle numerous changes with the security patch(es).
(I'm not saying it happens. I just theorise how the policy could have been envisaged)
In the good old days, there were exploits patched years prior by some OEMs that were never upstreamed, even to Google. A new rooting app would come out and... just not work. I don't know if that still happens, though.
Not really... bundling numerous changes still isn't a total redesign of whichever subsystem was affected, so it's pretty obvious where the small security-relevant changes are. A stupid embargo was always enough to ruin defensive code analysis for white hats, but never enough to stop code-analysis-driven attacks by black hats.
How does this work legally? If Android (AOSP) is open source, then once one OEM updates, surely the owner gets the legal right to request the sources. IIRC the maximum delay is 30 days.
Almost all of AOSP is under the Apache or BSD licenses, not the GPL. Very few GPL components remain (the kernel being the large and obvious one).
So, yes, making a GPL request will work for the very few components still under GPL, if a vendor releases a binary patch. But for most things outside of the kernel, patch diffing comes back into play, just like on every closed-source OS.
weird tangential question then: when does GPL stop being infectious?
I would understand in a modular system like an operating system: one can argue that the kernel is a single component.
But if you're buying an appliance, the OS is effectively one single unit: all linked together.
Why do a binary executable and a binary OS image seem to be treated differently here? Both are equally inscrutable.
The FSF has always been pretty clear on this: if you use a linker (static or dynamic), it applies; if you don't, it doesn't. They even wrote the LGPL with this distinction in mind, and introduced an exception for yacc (Bison) output to accommodate non-free software.
In the case of binary releases, you can request the sources of the relevant subcomponent (e.g. the kernel). The component boundaries are pretty clear with respect to Linux: Torvalds made it quite clear early on that the kernel's GPLv2 does not apply to anything in user space.
This is also where the important distinction between GPLv2 and GPLv3 comes in: under GPLv3, it is a breach of the license to ship code on a device that does not allow the end user to update that code. Which has effectively pushed everyone away from GPLv3-licensed software.
IMHO the move to GPLv3 has likely caused more harm than good to the FOSS ecosystem; in some alternative universe, GPLv3 never happened, most of Android's userspace is GPLv2, and we get the source for everything. In both universes we still don't get to deploy changes to devices we own, so GPLv3 won us nothing.
The FSF considers linking to be a definite example of derived works in general, but I don't believe they consider lack of linking to prove that something isn't a derived work.
The goal of the GPL is to flip draconian copyright maximalism on its head, and copyright laws don't talk about linkers so that can't be the deciding factor. Not to mention that it would be trivial to work around linking by creating a stub and calling the GPL code as a subprogram (in kernel contexts a spiritually similar setup is called the "GPL condom" and my impression is that most lawyers not employed by NVIDIA consider this to not be a get-out-of-jail-free card).
> (in kernel contexts a spiritually similar setup is called the "GPL condom" and my impression is that most lawyers not employed by NVIDIA consider this to not be a get-out-of-jail-free card).
The whole thing with Linux's conception is that it's predicated on any and all unlicensed usage of GPL-only interfaces being copyright infringement of other usage in the kernel source. This is an extremely broad claim to make in general (especially in light of Google v. Oracle), and the 'GPL condom' approach is just to further ensure that the unlicensed side is textually unrelated to the kernel. When there's no infringement, the copyright holders can't do a single thing, except to technologically make it harder on you.
Meanwhile, the whole GPL idea of linking vs. statically embedding is only applicable when you're shipping someone else's GPL-licensed code alongside your non-licensed code, in which case you're bound by its terms. If you're not shipping someone else's code, then there's plenty of ways to force a particular build, etc., in the manner that the GPL is trying to prevent. Heaven knows I've likely violated the spirit of the GPL before just through Hyrum's law.
I see what you're getting at, but on the other hand there is also a difference between APIs that are intended for use by third parties that are just "regular usage of the program" and internal functions that are being exposed due to technical factors in how the source code is organised (i.e., the fact that Linux organises its code into loadable modules and does not expose all symbols to try to avoid needless breakages).
To be clear -- the general view is that the GPL is viral in both cases (in fact the general view is that any user of the published interfaces of a GPL-licensed library is a derived work -- even in cases not involving compilation or linking), but I think the kernel module case is even more clear-cut than that.
In my view, the fact that the Linux kernel interfaces change incredibly frequently in every release specifically in response to internal code changes really makes it hard to believe that usage out of tree is just the same as using the syscall interface (which is what NVIDIA et al. tend to argue). (Note that the Linux syscall exception is actually not the license for the entirety of Linux -- almost none of my code contributions have been under the syscall exception and the same is true for almost all Linux contributions.)
For what it's worth, I think the distinction between EXPORT_SYMBOL and EXPORT_SYMBOL_GPL has been a net harm to any discussion about module-related GPL violations, precisely because there isn't an obvious line you can draw between their usage and it just muddies the waters unnecessarily (recent attempts to further lock this down seem to indicate some kernel developers agree that this was a mistake). If you imagine an analogous case with a Python program and someone adding files to it which modify the internal state of the original program through interfaces that were only visible because of technical aspects of code organisation, the case becomes far more clear and I don't think further technical shenanigans solve the underlying legal issue.
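To spell out that Python analogy with a hedged, entirely made-up example (two fictional files shown in one listing, not taken from any real project): the original program only "exposes" its internals because of how its modules happen to be organised, and the add-on works solely by rewiring that internal state.

```python
# scheduler.py -- part of a fictional GPL-licensed program.
# _run_queue and _preempt are internal details; they are reachable from
# outside only because Python modules expose their top-level names, not
# because they are meant as a stable API for third parties.

_run_queue: list = []

def _preempt(task):
    """Internal helper: jump a task to the front of the queue."""
    _run_queue.insert(0, task)

def schedule(task):
    """The intended public entry point."""
    _run_queue.append(task)


# vendor_addon.py -- a separately shipped, non-GPL add-on (also fictional).
# It copies no text from scheduler.py, yet it only works by reaching into
# scheduler's internal state -- the analogue of an out-of-tree module built
# around internal kernel interfaces rather than the stable syscall boundary.

import scheduler

def boost(task):
    # Bypass the public schedule() API and manipulate internals directly.
    scheduler._preempt(task)
    del scheduler._run_queue[8:]   # silently drop everything past a cutoff
```

The add-on is textually unrelated to the original, which is the "GPL condom" intuition, but whether that textual separation matters legally is exactly the question being argued above.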
Google v. Oracle was also about copying the interface itself and whether replicating said interface was fair use (which I think everyone except Oracle would find to obviously be true, otherwise the entire history of GNU and Linux would be one of copyright infringement). It was not at all concerned with creating combined works through the use of an interface. You could try to make the argument that (in light of Google v. Oracle) the generally accepted view of users of a GPL-licensed library being derived works of said library is somewhat questionable, but I think that's a separate discussion (as I said, I think the module discussion is even more clear-cut).
The GPL works because you create a derived work. The legal ability to create a derived work is restricted by copyright. The license grants you an exception to that default restriction, but only if you comply with all of its terms.
The most obvious way around it is to claim the thing you created is not a derived work. Nvidia probably has a good case for this because 99% of their code has absolutely nothing to do with the Linux kernel; making an adapter so you can plug that 99% into the kernel does not make it magically derived from it. Some random hardware driver doesn't have that same claim of being mostly independent of the kernel - graphics drivers are particularly complicated.
Regardless of whether the driver is a derivative work of the kernel, the combination of the kernel and the driver is obviously derivative of both its parts and so you have to comply if you ship that, too.
That is explicitly mentioned in GPL FAQ:
https://www.gnu.org/licenses/gpl-faq.html#MereAggregation
The short answer is copyright law and jurisprudence. The whole purpose of copyleft is to flip draconian copyright regimes over and make them protect users instead, so the GPL generally has the most maximalist stance allowed by copyright law. If copyright law would say that combining or extending software in a particular way is not fair use then the GPL generally would render the combination GPL.
In practice, GPLv2 would not be viral in the way you describe unless you can show that all of Android is a derived work of Linux (not true). GPLv3 would require users to be able to replace components under said license, which has an impact on how such an appliance needs to work (though GPLv2 does also have somewhat related text about "the scripts used to control compilation and installation"), but it wouldn't expand the scope of code under the license, just the terms.
Sadly, the layperson's and lay developer's interpretation of the GPL has been watered down over the years, and the GPL wasn't maximalist enough to begin with - see AGPL, SSPL for extensions created by people who saw new kinds of linking that didn't appear to be covered by the GPL. Of course big corporations preferably use these new kinds of linking that aren't covered, which is why new licenses are necessary.
> the OS is effectively one single unit: all linked together
If your appliance runs linux it has separate components just like desktop linux.
You want to do as little as possible in kernel space, and depending on the appliance there isn't even any need for it.
So, like desktop linux, you can have closed source binaries on top of the kernel.
I just don’t see the distinction as clearly when it’s a single binary that cannot be decoupled or introspected.
Why is it that if I build a static binary with GPL code and distribute it, I must open-source my changes, but if you do the same with a whole OS image, it's not necessary?
Feels like it should all be fine or none of it is fine somehow.
If you go that way, every bash script you've ever written should also be under the GPL.
And R code, at a quick glance.
Only if someone were to copy the entire script and then put it into their distributed binary, which is what I am saying the OEMs are doing with Linux.
> when does GPL stop being infectious?
Either a) when the license has an explicit exemption (such as at glibc or the kernel's userspace interfaces) or b) when something ceases to be a "derivative work" in copyright terms (which is ultimately a legal question for lawyers).
There's a concept of "separate works", see for example https://www.gnu.org/licenses/gpl-faq.html#GPLCompatInstaller .
Tangentially, I assumed that the GPL must have some built-in exception for running non-GPL userspace programs on top of a GPLed kernel (similar to the System Library exception). However, it seems like it doesn't, since the Linux kernel has its own exception to allow this: https://spdx.org/licenses/Linux-syscall-note.html.
Note that the Linux syscall exemption is actually not the license of the entirety of Linux, because most code contributed to Linux is under the standard GPLv2. It's just a red herring -- there is no need for such an exemption because the generally held view is that such programs are not derived works of Linux (from a copyright law perspective) in the first place.
Have you ever tried requesting the source code for your phone?
They'll either ignore you, or give you something that is obviously not the source code (e.g. huge missing sections; often they'll only produce kernel code, and not even a way to compile it). Law be damned: they don't follow it and nobody is forcing them to.
> the delay ends up being pointless
Why though? It is pointless from the engineering and security standpoints, but for Google this may serve their goals very well.
Fuck, and I cannot emphasize this enough, the OEMs.
I am so sick of security being compromised so stupid, lazy people don't have to do their jobs efficiently. Not like this is even unusual.
I don't think it is laziness per se. It's a combination of having far too many models (just look at Samsung's line-up, more than ten models per year if we don't count all the F and W variants), using many different SoCs from different vendors (just taking Samsung again as an example: Qualcomm Snapdragon, Samsung Exynos, MediaTek Helio, MediaTek Dimensity, sometimes even a different chipset for the same phone model per region), and each model now being supported for multiple years on a monthly or quarterly update schedule (Samsung: recent A5x, Sxx, Sxx FE, Z Flip x, Z Flip 7 FE, Z Fold x, Xcover x, etc. are on a monthly schedule). This across a multitude of kernel versions, AOSP versions (for older phones), and OneUI versions (for phones that haven't yet been updated to the latest OneUI).
They must have tens of different models to roll out security updates for, with many different SoCs and software versions to target.
And compared to other Android vendors, Samsung is actually pretty fast with updates.
It's true that other manufacturers have smaller line-ups, but they also tend to be smaller companies.
Compare that with Apple: every yearly phone uses the same SoC, only with variations in simpler things like CPU/GPU core counts.
To me this is the ultimate failing of ARM as an ISA: the fact that you even need to consider "targeting" allows a deficient ISA like x86 to still stand head and shoulders above it in terms of OEM support (though perhaps not security).
It has nothing to do with the ISA and everything to do with the system architecture. Look up PC-98.
Also: PC being a "standard" is a lie; ACPI is a horror.
> I don't think it is laziness per se
You forgot the "stupid" part.
> It's a combination of having far too many models (just look at Samsung's line-up, more than ten models per year if we don't count all the F and W variants), using many different SoCs from different vendors
> [...]
> This across a multitude of kernel versions, AOSP versions (for older phones), and OneUI versions (for phones that haven't yet been updated to the latest OneUI).
Those are choices. If you want to do that, you need a process that can support it.
I suppose it could be that they just don't care and are deliberately screwing their users, but never attribute to malice that which can be explained by incompetence and all that.
>> Those are choices. If you want to do that, you need a process that can support it.
__need__ is doing a lot of work here. There is no forcing function to get OEMs to do this ASAP: 1) the market doesn't really care that much; 2) there are no regulations around this (and even if there were, can you immediately recall a tech exec going to jail for breaking the law... )
> the market doesn't really care that much
This. Pixels are not more expensive than flagship Samsungs. If people cared and bought Pixels because they get the security updates, then Samsung (and the others) would follow. But people don't care, so the OEMs don't do it.
It's kinda weird to single out Samsung here, because they are pretty good with security updates and they explicitly talk about long security support periods in their marketing. They are not as fast as Pixel, but everything from the mid-range up (A5x and above) gets monthly updates, and they are usually 1-4 weeks behind Google.
It's the other vendors that are the issue. Even Fairphone is behind a lot (and they only release one model at a time).
The "(and others)" part was about including the other OEMs :-). I used the Samsung flagship as a specific example because it is very expensive, and people who buy it don't have the excuse of the price.
Those are choices. If you want to do that, you need a process that can support it.
I suppose it could be that they just don't care and are deliberately screwing their users, but never attribute to malice that which can be explained by incompetence and all that.
I think for a long time Android users did not really care. Until a few years ago, Android security support was abysmal, with many vendors only doing 1-2 years of updates. Users bought the phones and didn't care, so I guess it was a smart business move not to care.
This changed in recent years due to a mixture of factors: the (then) upcoming EU requirement to support devices with security updates for multiple years, Apple being able to tout this as an advantage, Google and Samsung entering into a competition to promise the largest number of years of security support, etc.
Welcome to Android. It started out a bit undercooked and Google relied on OEMs to make finished polished products. Then the reality that OEMs suck at software hit them in the face. They spent years acquiring more control of their platform while trying not to piss off Samsung.
Pretty much this... and even then, they still suck hard. Apple was right to start off with as much control over their platform as they did. The only reason I never went with iPhone is it started as an AT&T exclusive, and you couldn't pay me enough to be their customer ever again.
The bigger headline is that Google is effectively giving attackers 3-4 months of advanced access to security patches: https://grapheneos.social/@GrapheneOS/115164183840111564.
Have you considered the possibility that this may not be motivated by security at all, given the recent spate of similarly illogical and somewhat hostile decisions?
CIA wins!
> Android security patches are almost entirely quarterly instead of monthly to make it easier for OEMs. They're giving OEMs 3-4 months of early access which we know for a fact is being widely leaked including to attackers.
Android is over 15 years old and Google still hasn't fixed the update mess. Google should be in charge of shipping security updates, not OEMs. You don't see Dell being responsible for Windows security updates.
The CRA should hopefully help here. See the Cyber Resilience Act, Article 14 – Reporting obligations of manufacturers: https://www.cyberresilienceact.eu/the-cyber-resilience-act/#
The solution (heavily) alluded to by GrapheneOS in https://grapheneos.social/@GrapheneOS/115164212472627210 and https://grapheneos.social/@GrapheneOS/115165250870239451 is:
1. Release binary-only updates (opt-in).
2. Let the community (a) make GPL source requests for any GPLed components and (b) reverse engineer the vulnerabilities from the binary updates.
3. Publish the source once everything is public anyway.
Which just shows how utterly ridiculous all this is.
One thing that seems positive is that it is now possible to release binary patches earlier than before, isn't it? My understanding is that before, OEMs had to wait for 1 month, and now they can release the binary patches right away.
I see a lot of people saying how the whole thing is completely ridiculous, but this part seems like a win.
I currently use LineageOS on my Pixel. Is it worth trying GrapheneOS?
On a Pixel from 6 upwards? Absolutely. GrapheneOS is what Android should be in terms of privacy and security. Its major drawback is only being available on Pixels, but if that is what you have…
I bought my Pixel 6 specifically to run GrapheneOS, and I really hope I can repeat that for my next device.
It's a crying shame that there isn't a Graphene compatible phone that also has a micro SD slot and headphone jack. The perfect phone just doesn't exist in our timeline
My rule is: if you can run GrapheneOS, you should run GrapheneOS.
My second rule is: if you are buying a new phone and can afford one that supports GrapheneOS (at the moment it means a Pixel), then you should go for that.
I haven't used LineageOS for a long time, but I remember it being really good. And I used CyanogenMod, which IIRC was its predecessor.
GrapheneOS is a different ball game IMO, especially if you need to use Google Play services, banking apps, etc. I'm not sure what the current state of microG or Google services on Lineage is.
I bought a second-hand Pixel 6 just for Graphene, and when that died I bought another Pixel.
GrapheneOS is by far the better OS security and privacy wise.
It should be the default choice for everyone IMO, as long as they have a phone that supports it.
See this comparison: https://eylenburg.github.io/android_comparison.htm
This is ridiculous. Makes one wonder about the state of OEM development. It's not hard to build a CI pipeline for Android. There is no good reason OEMs can't be running test builds of ROMs with security patches within hours, and have QA done in a day or two, or a week max.
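As a rough sketch of what such a pipeline could look like, under stated assumptions (the manifest URL, branch, lunch target and test module below are placeholders, not any OEM's real configuration): sync the security patch branch, build the ROM, and kick off automated tests without a human in the loop.

```python
#!/usr/bin/env python3
"""Toy sketch of an unattended security-patch build pipeline for an Android
ROM. All names below (manifest URL, branch, lunch target) are placeholders."""
import subprocess
import sys

MANIFEST_URL = "https://example.com/platform/manifest"  # placeholder
PATCH_BRANCH = "security-2025-09"                       # placeholder
LUNCH_TARGET = "aosp_somedevice-userdebug"              # placeholder

def sh(cmd: str) -> None:
    """Run a shell command in the source tree, failing fast on errors."""
    print(f"+ {cmd}", flush=True)
    subprocess.run(["bash", "-c", cmd], check=True)

def main() -> int:
    # 1. Sync the tree onto the branch carrying the security patches.
    sh(f"repo init -u {MANIFEST_URL} -b {PATCH_BRANCH} && repo sync -c -j8")
    # 2. Build the ROM; envsetup/lunch are shell functions, so keep one shell.
    sh(f"source build/envsetup.sh && lunch {LUNCH_TARGET} && m -j$(nproc)")
    # 3. Run an automated test suite against an attached device or emulator.
    sh("source build/envsetup.sh && atest CtsSecurityTestCases")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

This only covers the build and smoke testing; the point is that none of it needs a human in the loop.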
> There is no good reason OEMs can't be running test builds of ROMs with security patches within hours
That sounds like it costs money and doesn’t net the mfg new sales.
Trying to do software well at hardware-centric companies is hard, mostly for this reason.
>Makes one wonder about the state of OEM development.
Why wonder at all? It sucks, and its security is generally in shambles. Security is rarely very high on their list of priorities, as features/prettiness are what sell their phones.
The only responsible disclosure is full disclosure.
I don't understand Google's rationale here. What is the point of giving wind to the hackers' sails while also driving home the narrative that Android is a less secure system, especially after the recent changes related to the security of the latest iPhone?
They are giving a chance for government agencies to hack Graphene phones.
You mean the changes Pixel phones have had since late 2021? /s https://grapheneos.social/@GrapheneOS/115176133102237994
We're talking about OEM devices, aren't we?
If the smart plan of having others reverse-engineer the fixes won't work, I imagine they'll turn into a delayed-source product.
To my recollection, they always maintained that being open source doesn't matter for security, after all.
(I strongly disagree)
Don't trust these guys.
That's not helpful without context and substance.
"They can easily get it from OEMs or even make an OEM."[0]
I agree with their points in the thread, but could Graphene "become" an OEM to get access to the security patches sooner? Just curious.
[0] https://grapheneos.social/@GrapheneOS/115164297480036952
They have access to the patches.
They just can't make an official release with it, because they can't publish the patch sources (they're embargoed), and since their releases are open source, the builds must match the published source...
They have an OEM partner right now who funnels them the updates, which is how they get access to them.
Is it Framework?
Why would Framework have access to Android patches?
You're absolutely right
Motorola/Lenovo then?