Open source self-tied knots

Summary: Pragmatism demands that proprietary modules be allowed to load in Linux, and requires the creation of a consistent kernel interface for driver developers. Unfortunately, ideals militate against these requirements.

TOPICS: Open Source

I'd known about the debate between open source pragmatists such as Linus Torvalds, who believes that proprietary modules in Linux are permissible provided they're not derived from the Linux kernel, and license purists such as the FSF, which states, in the words of FSF attorney Eben Moglen, that "if the kernel were pure GPL in its license terms...you couldn't link proprietary video drivers into it, whether dynamically or statically."

Funny tug of war, that. You have the person who started the whole Linux project and decided which license it would use describing how he understands the terms of that license pulling against the people who wrote the license and feel their interpretation trumps that of the copyright holder. I don't know about you, but that would annoy the hell out of me.

But the FSF, under Stallman, is all about control over software. You've heard of submarine patents. Well, there's also such a thing as a submarine open source license, and if you think every person who applied the GPL to their code fully understood the ramifications of stating that their software is covered by GPL v2 or "any later versions," you should have your head examined.

Frankly, the FSF interpretation is, to my mind, insane. Why on earth would a license designed to prevent you from EXTENDING the code without releasing the source code for those changes (which, as I've said before, is an efficient way to prevent forking in open source projects) be allowed to determine the license used by binary modules that get loaded into the system? If those modules were based on GPLed code, then yes, they should be released under the GPL. I might even argue that if the kernel REQUIRED the presence of those modules to run, then they should be covered by the GPL. However, graphics cards are interchangeable, which makes their drivers optional add-on components.

If you're a purist and want a completely open source operating system, then you can choose a graphics card for which GPLed drivers exist. Intel, apparently, is on board with this (no pun intended). If you don't care, though, why does the FSF have the right to force you to be free in ways they think you should be (which is really freedom for other people, as most users couldn't care less whether they have access to the source code of their drivers)?

Yet again, the Free Software Foundation is insisting that the contributions of proprietary companies are unwelcome and unnecessary, even though the facts of software history show that proprietary software is the largest generator of use value in existence. This is silly, IMHO, but pragmatism is a principle completely lost on the FSF.

Yesterday's article on the open source tug of war, however, informed me of something I didn't know, which was that Linux lacks a stable interface to the kernel.

A stable interface provides a fixed and documented way for a driver to communicate with the kernel. Even if the kernel interior changed, the method of communication would remain the same, and drivers wouldn't have to change with kernel updates, for example.

...

With the existing fluid interface in Linux, programmers must provide drivers for numerous kernel variations, and old drivers--open or proprietary--stop working, said Miguel de Icaza, vice president of development at Novell. "Contrast this with Windows, where there is a stable interface for drivers in the kernel. A driver developed against NT 4 works on XP," he said.

Some, according to the article, defend this as providing maximum scope for innovation, or as a firewall against the spread of proprietary drivers upon which Linux must rely. But if you get down to brass tacks, what is really being said here is that Linux has no standard interface for driver development.

Imagine if my employer Microsoft did something like that. Perhaps they could defend the lack of a consistent API as a means of giving Windows developers maximum flexibility, or of preventing any one graphics card company from becoming too dominant on the Windows platform. It would certainly solve some of Microsoft's legal problems. Sorry, European Commission, we can't document our server APIs because they don't exist.

The chorus of laughter would be deafening. The same should be the case in Linux.

Standard APIs are job number one for a product that aims to become the new dominant ecosystem...and yes, that is what most Linux developers clearly want. I'm quite surprised that this is still a problem. ATI may claim that it accepts the fluidity of the kernel interface "as part of our day-to-day responsibilities in Linux," but I bet that is said through clenched teeth after months spent trying to get a driver to work across distributions.

Fragmentation didn't work for old-school Unix. Linux solved the structural issue by providing a level of consistency made possible through use of the GPL. It's worth remembering that before attempting to justify an unjustifiable lack of a consistent Linux kernel interface.


John Carroll

About John Carroll

John Carroll has delivered his opinion on ZDNet since the last millennium. Since May 2008, he has no longer been a Microsoft employee. He currently works at a unified messaging-related startup.


Talkback

53 comments
  • I don't fully understand

    the complexities of the Linux kernel, and why the driver situation is what it is, but I would bet that Linus has faced this issue many times. Creating a set of APIs would/could create an extra layer that would potentially slow people down with the context switching et al. HP-UX relies heavily on "banging the hardware" - i.e. direct connections into the kernel that make it 20% faster than its rivals (for similar clock speeds). Of course this has its drawbacks (being stuck on ONC 1.2 while everyone else is on ONC 4.0).

    As OSes develop and offer more features, you would need to "update" the APIs. This API "churn" can be bad for your reputation (HEY, you changed something that I depend on!). This also brings up the "secret API" issue where you can define "internal" APIs and not publicize the fact (not that anyone would do this . . .). Maybe Linux just doesn't want to get into that bucket of trouble.
    Roger Ramjet
    • Secret interfaces

      [i]As OSes develop and offer more features, you would need to "update" the APIs. This API "churn" can be bad for your reputation (HEY, you changed something that I depend on!). This also brings up the "secret API" issue where you can define "internal" APIs and not publicize the fact (not that anyone would do this . . .). Maybe Linux just doesn't want to get into that bucket of trouble.[/i]

      Well, there certainly aren't "secret" interfaces. On the other hand, some are more stable than others. The software engineering principles apply, so making external accesses to the internal functions or data of a different part of the kernel will get shot down -- Linus is absolute death on stuff like that.

      Of course, that doesn't stop people from misusing source access. You see it a fair bit in code that companies never intend to merge into the main kernel; I'm up against a VPN program that only works for 2.6.12 and before because the nitwits called a function that they shouldn't have.

      The same thing (at least used to) happens at Microsoft, where access to the system internals gets used to do things outside of the official APIs. Tin-foil hat theories aside, those "secret API" calls that are so well documented are, arguably, just a matter of programmers taking advantage of "unstable" interfaces.
      Yagotta B. Kidding
    • Re: I don't

      [i]Creating a set of APIs would/could create an extra layer that would potentially slow people down with the context switching et al.[/i]

      And ditching C-style APIs in favor of assembly language calls would speed things up even more. It seems a huge hassle to me, for an OS that already struggles with driver compatibility, not to have a standard interface for drivers, particularly given that the biggest competitor - Windows - already does.

      [i]As OSes develop and offer more features, you would need to "update" the APIs.[/i]

      Yes. Vista has a new driver architecture. The old architecture had remained consistent for a long time, making it vastly easier to write device drivers. Basically, you make a best-guess effort to define the interface, and in the interest of economies of scale, you do your level best to hew to that architecture for as long as possible. Change is still possible, but in the interim, people have a stable layer they can write drivers against.

      [i]This also brings up the "secret API" issue where you can define "internal" APIs and not publicize the fact (not that anyone would do this . . .).[/i]

      How, in an open source OS, would that be possible? All you are doing is defining a standard interface.

      You really are defending the indefensible, IMO.
      John Carroll
      • Stability is rarely the issue

        The driver API of the Linux kernel is stable. It hasn't significantly changed since the 2.6 revision began. The same was true in the 2.4 series.

        You've based your opinion that there is no standard in Linux kernel drivers on a reporter's interpretation of a manager's interpretation of a developer's words. All that is required for a driver to work with a microrevision upgrade is a recompile.

        This need to recompile is a well-known weakness of the basic architecture of the monolithic Linux kernel. The API, the standard if you will, changes rarely, if at all, but the location of the jump tables within the kernel does change when the kernel is recompiled. Since an older pre-compiled driver contains calls that point to the wrong locations, the driver stops working. A recompile usually fixes the problem.

        Sometimes a kernel distributor will set permissions so that binary only drivers cannot run. This is not an API change. This is a distribution mistake made by a GPL distributor, since under GPL freedom 0 it is the user, not the distributor who makes the decision to run binary only drivers.
        John Le'Brecage
        • It sounds like...

          ...all you'd need is a small bit of code that acts as a shim, and the rest of the code could be proprietary (and a pre-compiled binary). However, then you run into the problem of "what is the interface of the shim-wrapped creature."

          One solution is to have the complete source code, thus getting around the need for an "inner" interface. However, I'm not a big fan of providing the source code just so you can recompile all the drivers, and that certainly isn't something proprietary companies want to do.

          This still seems like a knot, one that makes it hard for proprietary companies to target Linux.
          John Carroll
          • And this is a problem?

            [i]It sounds like all you'd need is a small bit of code that acts as a shim, and the rest of the code could be proprietary (and a pre-compiled binary). However, then you run into the problem of "what is the interface of the shim-wrapped creature."[/i]

            It's whatever said proprietary company wants it to be, John. Surely you're not telling us that's wrong?

            [i]One solution is to have the complete source code, thus getting around the need for an "inner" interface. However, I'm not a big fan of providing the source code just so you can recompile all the drivers, and that certainly isn't something proprietary companies want to do.[/i]

            Of course you're not a fan. If you were, you wouldn't be where you are. On the other hand, quite a few companies (Intel, have you heard of them?) seem to find it quite to their benefit. Intel Linux driver development teams, for instance, have working tested drivers while the MS driver teams are still holding kickoff meetings. With much smaller teams.

            For hardware companies, unlike Microsoft, driver development is purely a cost center. Denying others the opportunity to participate is, shall we say, not obviously to their advantage?

            [i]This still seems like a knot, one that makes it hard for proprietary companies to target Linux.[/i]

            I hate to break the news to you, but not every company on Earth that develops software has the same business model as Microsoft.
            Yagotta B. Kidding
          • Re:

            [i]It's whatever said proprietary company wants it to be, John. Surely you're not telling us that's wrong?[/i]

            Why is a standard interface useful, then, if it is BETTER for each company to make their own internal interface? Is there no advantage to a standard interface that everyone in the industry uses, thus making driver development that much easier?

            I'm thinking economies of scale here, a place where Linux drivers seem to fall down, in my estimation.

            My opinion, but the approach taken by Linux would seem to make it likely that driver developers abandon older builds sooner rather than later. If you have the source code, then a developer will probably find a solution. Perhaps that's why they want to have source code for everything, to support a driver extensibility model that requires recompilation as the kernel changes.

            [i]Unlike Microsoft, driver development for hardware companies is purely a cost center. Denying others the opportunity to participate is, shall we say, not obviously to their advantage?[/i]

            [i]Of course you're not a fan. If you were, you wouldn't be where you are. On the other hand, quite a few companies (Intel, have you heard of them?) seem to find it quite to their benefit.[/i]

            Intel is hardly ATI, and has an interest in breaking into the graphics market in ways they currently aren't.

            It's possible to open source Windows driver development, too. The company in question just has to CHOOSE to do so.

            [i]I hate to break the news to you, but not every company on Earth that develops software has the same business model as Microsoft.[/i]

            Fine, but Microsoft is far and away the most successful one, and if you choose not to use a design tactic that has proven very successful for them, you should have a good reason why. Frankly, I doubt Linux fans have thought about that given their strong antipathy for anything Microsoft.
            John Carroll
          • Highlander

            [i]Why is a standard interface useful, then, if it is BETTER for each company to make their own internal interface?[/i]

            Ever read Kipling's "In the Neolithic Age?"

            There are advantages to using a common ABI. There are advantages to publishing drivers as source code. There are advantages to having a common system-independent driver with its own API that uses a custom shim to reconcile the driver's needs with that of the system.

            However, there's a difference between "better" and "the only way."

            [i]Is there no advantage to a standard interface that everyone in the industry uses, thus making driver development that much easier?[/i]

            Driver development does have a standard interface. Or are you conflating "development" with "distribution?"

            [i]I'm thinking economics of scale here, a place where Linux drivers seem to fall down, on my estimation.[/i]

            Again, you're trying to say that because Linux isn't controlled from a single command center which has the power to push the burden of driver development and distribution onto the hardware vendors, a system that is adapted to support Microsoft's business doesn't fit. Pardon my amazement.

            Would it surprise you greatly to find that that model isn't used much, precisely because [b]it[/b] scales poorly (even for Microsoft)? Most hardware vendors only support a very limited range of MS platforms precisely because of those economies of scale, whereas the same hardware is supported by distribution maintainers for just about every version of Linux ever shipped.

            Which scales better: the model that has two bottlenecks (Microsoft and the hardware vendor) or the model where the resources available to the job are proportional to the number of people interested? To help answer the question, pull a random network adapter out of the junk pile and try to install it on the platform of your choice. (It's an experiment I've run before.)

            [i]Fine, but Microsoft is far and away the most successful one, and if you choose not to use a design tactic that has proven very successful for them, you should have a good reason why.[/i]

            Not having monopoly power is usually a strong reason to not apply business methods that depend on monopoly power.

            [i]Frankly, I doubt Linux fans have thought about that given their strong antipathy for anything Microsoft.[/i]

            Microsoft never entered into the subject. Linux device driver development follows the Unix model (which predates the founding of Microsoft) and more recently has strong roots in the fact that very few hardware vendors support Linux directly.

            Under the circumstances, following your advice to rely on hardware vendors for driver development would, to put it mildly, be unwise.
            Yagotta B. Kidding
      • Strangely enough

        The drivers for Windows XP and Windows XP 64-bit required different binaries...

        THIS is what people are talking about when they try to enlighten you about the difference between A[b]P[/b]I and A[b]B[/b]I.

        Really John, it's not a matter of "defending the indefensible" but you actually acknowledging the issue instead of dancing around it.
        Robert Crocker
  • Stable Interfaces

    [i]Imagine if my employer Microsoft did something like that. Perhaps they defend the lack of a consistent API as a means of giving Windows developers maximum flexibility, or to prevent any one graphics card company becoming too dominant on the Windows platform. It would certainly solve some of Microsoft's legal problems. Sorry, European Commission, we can't document our server APIs because they don't exist.[/i]

    The issue isn't stable A[u]P[/u]Is, it's stable A[u]B[/u]Is. The kernel binary interfaces aren't locked down, which means that modules have to be linked to the kernel.

    The programming interfaces, on the other hand, are quite stable as long as the developer follows the guidelines (and, yes, I've had problems with software that tries to use unstable functions. Having source doesn't mean you should use it.)

    Think a minute -- having the programming interfaces constantly changing makes for [b]waaaay[/b] too much opportunity to break things. If that isn't persuasive, it also makes far more work for the kernel team. Enlightened self interest and all that.

    The solution is actually quite simple: those who want to distribute binary kernel interfaces do so by inserting a source-code "shim" layer which can be compiled against the current kernel and connect it to the binary driver module.

    This is what NVidia does: they have a common driver code for both Microsoft and Linux platforms with a wrapper for each. The wrapper is open source (LGPL, IIRC) and thus arguably satisfies the legalities.
    Yagotta B. Kidding
    • Tom-ay-to, to-mah-to

      API, ABI, call it what you will, but this is semantics. Fine, it's not an API in the sense that clients build applications atop it. It is an interface that the OS uses for loadable modules, and it involves the same basic principle as an API.

      Your solution, which is: [i]those who want to distribute binary kernel interfaces do so by inserting a source-code "shim" layer which can be compiled against the current kernel and connect it to the binary driver module[/i], is hardly satisfying. You still need to keep on top of distributions. Granted, your job is made easier by properly compartmentalizing things and writing a shim layer (which is good engineering), but it would be made even EASIER, and compatibility would be more assured, if such a standardized ABI layer existed.

      Loved this statement:

      [i]The wrapper is open source (LGPL, IIRC) and thus arguably satisfies the legalities.[/i]

      Don't ask the FSF, as they are likely not to agree...but that's why you threw in the caveat "arguably." It certainly doesn't make the whole package (shim, core library) valid to load inside the Linux kernel, at least according to the FSF.
      John Carroll
      • User=Id10t

        John,

        Really, must you be so dense?

        ABI means application BINARY interface. That means that the compiled code is different.

        API means application PROGRAMMING interface. That means the calls used by the code to actually talk to the application (the kernel in this case).

        Now considering all the targets for the Linux kernel including x86 and x86-64 do you think it is possible to have one binary that supports both?

        Instead wouldn't it be better for the calls to remain the same and the binary be compiled for the applicable target?
        Robert Crocker
      • Fathometer

        [i]API, ABI, call it what you will, but this is semantic.[/i]

        John, if you don't know the difference between API and ABI bindings, I'd suggest talking to a programmer before writing a column on the subject.

        [i]You still need to keep on top of distributions.[/i]

        Your experience for making this expert statement is ...?

        Note that I have already posted WRT use of kernel-bound binary drivers (ATI, NVidia, VMWare, Intel Wireless) on unsupported distributions. No problemo. Works like a champ.

        So, since you see a problem where I don't, perhaps you could elucidate on your qualifications as a Linux kernel-level programmer?

        The problem appears to be that you are attempting to view a business model issue (who packages device drivers) as a programming issue, when in fact the problem arises because you're trying to force a foreign business model (Microsoft's get-your-drivers-from-the-vendor model) on Linux, which is organized differently. I am aware of the benefits that Microsoft's model confers on it, but that doesn't mean that it's the only viable one. In fact, absent Microsoft's market power it's doubtful that Microsoft's model would work.

        [i][b]You[/b] still need to keep on top of distributions.[/i]

        Presumably addressed to the hardware vendor and completely false. The distros can handle any presumptive customization themselves, and do.
        Yagotta B. Kidding
        • You're right Yagotta...

          You and me have definitely got to start chatting; if only to save ourselves typing and thinking time. Since I've utterly failed to find you, why not just look for me?
          John Le'Brecage
          • Ping JlB

            [i]You and me have definitely got to start chatting[/i]

            You might want to check mail at ids.
            Yagotta B. Kidding
        • So...

          [i]You still need to keep on top of distributions.

          Presumably addressed to the hardware vendor and completely false. The distros can handle any presumptive customization themselves, and do.[/i]

          So, are you saying that distributions will release shims for all device drivers that exist on their distribution? That would be one heck of a job on Windows, though it can be done.

          Likewise, who defines the interface followed by the thing wrapped by the shim?

          Hey, if you show me why this isn't a problem, I will explain the situation in a subsequent blog.
          John Carroll
          • Shimming and Shaking

            [i]So, are you saying that distributions will release shims for all device drivers that exist on their distribution?[/i]

            Yes, although very few drivers require "shimming."

            [i]That would be one heck of a job on Windows, though it can be done.[/i]

            You're still conflating "development" and "distribution."

            Your prior complaint was that compiling the shim layer was too much trouble. Well, compiling from source is what distribution maintainers do.

            It's not like it's hard or anything. Why, it can even be automated.

            [i]Likewise, who defines the interface followed by the thing wrapped by the shim?[/i]

            John, you're smarter than this. "The thing wrapped by the shim" is the hardware vendor's. They define it to suit themselves, that's the whole point. The "shim" simply fits their interface to the kernel's API.

            Note, again, that this is what NVidia does [i]for both Linux and Microsoft.[/i]

            [i]Hey, if you show me why this isn't a problem, I will explain the situation in a subsequent blog.[/i]

            You could turn it into an interview with NVidia, if you like.

            OK, here's an example: [b]ndiswrapper[/b] http://ndiswrapper.sourceforge.net

            The "thing wrapped by the shim" is a driver [i]for MSWindows.[/i] Full binary blob thingy from whoever. Obviously, the interface is the one Microsoft defined (one of your questions.) [b]ndiswrapper[/b] translates between the Linux kernel driver API and the current Microsoft driver ABI so that Linux systems can use MS drivers.

            Problem solved.
            Yagotta B. Kidding
      • Answer....

        "API, ABI, call it what you will, but this is semantic."

        Sorry John, but when an API changes; code must be rewritten. By contrast, when an ABI changes, code must merely be re-compiled. The two terms are not comparable functionally. Most of the arguments you make of how difficult binary driver code maintenance must be, for a stable Linux kernel version, fall aside when you consider this simple fact. Recompiling isn't that tough for the developer.

        Moreover, even your revised statement isn't accurate. There is a standard for the ABI. It was adopted six years ago as part of the LSB. However, it assumes a certain amount of openness of code - the interface (shim, if you will) must be recompiled to hard-link into the new jump points within the kernel. Get it, John? It's not the standard specification that changes, but the jump data organized by that standard.

        What you seem to want is the values in the kernel never to change, which means that Linux kernel development has to stop so that nothing ever moves. Basically you argue "don't ever update the kernel." Sorry, ain't ever gonna happen bucko.

        Now, if the Linux kernel were microkernel-architected, there'd be another way, but since the LK isn't, any argument in that direction is rather moot. ABI matching is one of the unfortunate frailties of the monolithic kernel, just as message lock is one of the frailties of all microkernels.
        John Le'Brecage
        • Okay...

          Let's say that all you have to do is recompile the shim code. Great. This is something developers can do, but non-technical users won't (or can't). Therefore, if the ABI changes (requiring a recompilation of the shim), who does that on behalf of the people who don't want to recompile anything? The distribution vendor, or the vendor of the device driver?

          Likewise, who defines the interface of the thing wrapped by the shim? The distribution vendor, or is there something defined as part of the LSB?

          [i]What you seem to want is the values in the kernel never to change, which means that Linux kernel development has to stop so that nothing ever moves.[/i]

          No, I'm not asking for that. I'm looking for some consistency that will let device driver vendors write once and not worry about it for...awhile. Windows changes its device interface as well. A new model exists for Vista. However, the old model existed for quite awhile, and that certainly helped to make life simple for device vendors who wished to target the Windows platform.
          John Carroll
          • Don't forget the consumers

            Consistency helps desktop consumers too, because they can continue to use their old devices with the new hardware (given that the slot/port remains the same). What most manufacturers tend to do is only support the last one or two OS releases (at least on Windows) with a particular model of product. After that the consumer may have to buy a new model in order to get drivers for a new OS.
            Mark Miller