Solaris vs AIX: It's the technology

Summary: AIX works, but so did System VR4 on NCR - and, hardware updating aside, that's just about how it compares to Solaris 10.

Topics: Hardware

From a technical perspective, of course, the right way to compare AIX to Solaris is to look at the technology. That might seem like a lot of work, but a long article by IBM's Shiv Dutta, "AIX 5L Version 5.3: What's in it for you?", on the developerWorks site largely eliminates the need, or at least the incentive, for doing it.

Dutta's article lists improvements from AIX 5L V5.2 to V5.3 and contains 441 instances of <li> - most of them signaling individually listed AIX improvements.

To quote 13 representative examples:

  1. Increased inter-process communication (IPC) limits In AIX, the individual IPC data structures are allocated and deallocated as needed, so memory requirements depend on the current system usage. Prior AIX releases defined the maximum number of semaphore IDs, shared memory segment IDs, and message queue IDs to be 131072 (128 K) for the 64-bit kernel. To cope with anticipated future scalability demands, AIX 5L Version 5.3 increases the maximum number of data structures for each of the IPC identifier types to 1048576 (1024 K).
  2. Thread support in gmon.out When an application consists of multiple steps in which different executables (all built with the -p or -pg flags to generate profiling information) are invoked in sequence, each executable overwrites the previous gmon.out file. In AIX 5L Version 5.3, the gmon.out file has been made thread-safe, so that each thread of a multi-threaded application has its own data in it.
  3. DBX malloc command Malloc debugging features have been integrated into the dbx command. This allows a developer to query the current state of the malloc subsystem without having to create complex, unwieldy scripts requiring internal knowledge of the malloc subsystem.
  4. tcpdump upgrade to latest level The tcpdump command has been upgraded to Version 3.8. As a consequence of this upgrade, iptrace and ipreport were also changed to use the upgraded libpcap library (Version 0.8) for packet capture and dump reading. AIX tcpdump, prior to AIX 5L Version 5.3, displayed packet timestamps down to 1 ns (10^-9 s). The open source tcpdump displays timestamps at 10^-6 s, and the new AIX tcpdump matches that 10^-6 s timestamp resolution. A number of new flags have been added to tcpdump. Also, a total of 87 protocol printers have been included to facilitate printing when using tcpdump.
  5. Volume group pbuf pools In previous AIX releases, the pbuf pool was a system-wide resource. In AIX 5L Version 5.3, the Logical Volume Manager (LVM) assigns and manages one pbuf pool per volume group. Version 5.3 has introduced the lvmo command, which can be used to display pbuf and blocked I/O statistics as well as the settings for pbuf tunables.
  6. Scalable volume groups AIX 5L Version 5.3 offers a new volume group type called the scalable volume group (VG). The scalable VG can accommodate a maximum of 1024 physical volumes and raises the limit for the number of logical volumes (LVs) to 4096. The maximum number of physical partitions (PPs) is no longer defined on a per-disk basis, but applies to the entire VG. The scalable VG can hold up to 2,097,152 (2048 K) PPs. The range of the PP size starts at 1 MB and goes up to 131,072 MB (128 GB), which is more than two orders of magnitude above the 1,024 MB (1 GB) maximum available in AIX 5L Version 5.2.
  7. Variable logical track group AIX 5L Version 5.2 accepted logical track group (LTG) values of 128 KB, 256 KB, 512 KB, and 1024 KB. To support larger sizes of many disks and better disk I/O performance, AIX 5L Version 5.3 accepts values of 128 KB, 256 KB, 512 KB, 1 MB, 2 MB, 4 MB, 8 MB, and 16 MB for the LTG size. Version 5.3 also allows the stripe size of an LV to be larger than the LTG size in use and extends support for stripe sizes for 2 MB, 4 MB, 8 MB, 16 MB, 32 MB, 64 MB, and 128 MB to complement the 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, 128 KB, 256KB, 512 KB, and 1 MB options available in prior releases of AIX.
  8. Striped column support for LVs In previous AIX releases, you could enlarge the size of a striped LV as long as enough PPs were available within the group of disks which defined the RAID disk array. Also, rebuilding the entire LV was the only way to expand a striped LV beyond the hard limits imposed by the disk capacities. To overcome the disadvantages of this rather time-consuming procedure, AIX 5L Version 5.3 introduces the concept of striped columns for LVs. In prior releases of AIX, it was not permitted to configure a striped LV with an upper bound larger than the stripe width. In Version 5.3, the upper bound can be a multiple of the stripe width. One set of disks, as determined by the stripe width, is considered as one striped column. If you use the extendlv command to extend a striped LV beyond the physical limits of the first striped column, an entire new set of disks will be used to fulfill the allocation request for additional logical partitions, as long as you stay within the upper bound limit. The -u flag of the chlv, extendlv, and mklvcopy commands now allow the upper bound to be a multiple of the stripe width.
  9. The performance monitoring tool introduced in AIX 5L Version 5.3, called procmon, displays a dynamic, sorted list of processes and information about them. It allows execution of basic administration commands such as kill, renice, and svmon on these processes. The procmon tool is an Eclipse plug-in and is mentioned under the Application development section. The command to start the tool is perfwb (/usr/bin/perfwb), which launches Eclipse with the procmon plug-in. The perfwb command is contained in the fileset bos.perf.gtools.perfwb.
  10. In previous versions of AIX, there were no tools available to monitor AIO (Asynchronous I/O). In Version 5.3, the performance kernel libraries have been modified to obtain AIO statistics, and the enhanced iostat command can be used to monitor them.
  11. A new flag has been added to the tar command to specify a list of files and/or directories to be excluded from the tar file being created, extracted, or listed.
  12. Flags have been added to the tar command to process a directory of files recursively. An option has also been added to specify an input file for tar extraction, much like the one that can be used for tar creation.
  13. Search highlighting has been added to the more command. When a search pattern matches, all matches are now highlighted. Highlighting is the default; a new '-H' option disables it. 'H' can also be used as a subcommand in an active more session to toggle highlighting on or off.
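
The scalable-VG limits in item 6 are easy to sanity-check. A quick sketch - the variable and function names here are mine, purely illustrative, not part of any AIX tool:

```python
# Sanity check of the scalable-VG limits quoted in item 6 above.
# Names and the capacity formula are illustrative, not an AIX interface.

MAX_PPS_SCALABLE_VG = 2_097_152   # 2048 K physical partitions per VG
MAX_PP_SIZE_MB = 131_072          # 128 GB ceiling on PP size in V5.3
V52_MAX_PP_SIZE_MB = 1_024        # 1 GB ceiling in V5.2

def vg_capacity_mb(num_pps: int, pp_size_mb: int) -> int:
    """Upper bound on a volume group's capacity: PP count times PP size."""
    return num_pps * pp_size_mb

# The PP-size ceiling grew 128x - "more than two orders of magnitude".
growth_factor = MAX_PP_SIZE_MB // V52_MAX_PP_SIZE_MB
```

At the maximum PP size, the theoretical per-VG ceiling works out to an enormous figure; the practical point is simply how far the new limits sit above the old 1 GB-per-PP world.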

Although AIX Version 6.0 is either into, or just about ready for, its beta release, 5L V5.3 is the current production toolset - and changes like those shown for iostat, tar, more, dbx, and many other common tools hardly amount to playing catch-up ball with SuSE 7.1, Solaris 2.5.1, or even HP-UX 11.

This doesn't mean V5.3 isn't better than V5.2 - if you've ever struggled with a database engine, like Sybase 11, that requires manual extent assignment, 5.3's lifting of the 1 GB partition limit will provide magical relief. Unfortunately, that effect only applies if you're coming from 5.2, because if you used any other Unix you wouldn't have seen this kind of problem since the early nineties - and that's really the bottom line on the entire comparison: the improvements to AIX - all that volume management stuff, for example - mostly invoke the same déjà vu feeling from ten and fifteen years ago.

So what makes AIX, AIX? Two things: first, a focus on controls that makes what Oracle does with ACLs look attractive; and second, the typical mainframer's obsession with processor-based systems virtualization.

Thus (to continue quoting Dutta's work) AIX 5L V5.3 has many new features, like:

  1. Disk quotas support for JFS2 AIX 5L Version 5.3 extends the JFS2 functionality by implementing disk usage quotas to control the use of persistent storage. Disk quotas may be set for individual users or groups on a per-file-system basis. Version 5.3 also introduces the concept of Limit Classes. It allows the configuration of per-file-system limits, provides a method to remove old or stale quota records, and offers comprehensive support through dedicated SMIT panels. It also provides a way to define a set of hard and soft disk-block and file-allocation limits, and the grace periods before the soft limit becomes enforced as the hard limit.

    The quota support for JFS2 and JFS can be used on the same system.

  2. Micro-Partitioning: Allows a single processor to be shared by up to 10 partitions and supports up to 254 such partitions.
  3. Virtual I/O: Supports the I/O needs of client partitions (AIX® and Linux®) without having to dedicate separate I/O slots for network connections and storage devices for each client partition. You can boot and run the partitions from Virtual SCSI devices and achieve network connections using the Virtual Ethernet and Shared Ethernet Adapter.
  4. Shared Ethernet Adapter (SEA) Failover: Provides Shared Ethernet Adapter High Availability by offering the ability to create a backup SEA on a different Virtual I/O server that will bridge, should the primary SEA become inactive.
  5. SMT: Version 5.3 supports the SMT mode of POWER5 processors. When you enable this mode, a single physical POWER5 processor appears to the operating system to be two logical processors, independent of the partition type. A partition with one dedicated processor would behave as a logical 2-way by default. A shared partition with two virtual processors would behave as a logical 4-way by default. You can turn the mode on or off for a specific partition either immediately or on a subsequent boot of the system.
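
The SMT arithmetic in item 5 reduces to a one-liner. A toy sketch - the function is mine, purely illustrative, not an IBM API:

```python
# Illustrative only: logical-processor counts under POWER5 SMT,
# per item 5 above. Not an IBM interface.

def logical_processors(processor_count: int, smt_enabled: bool) -> int:
    """Each POWER5 processor (dedicated or virtual) presents itself to
    the operating system as two logical processors when SMT is on."""
    return processor_count * (2 if smt_enabled else 1)

# One dedicated processor behaves as a logical 2-way with SMT on;
# a shared partition with two virtual processors as a logical 4-way.
dedicated_two_way = logical_processors(1, True)
shared_four_way = logical_processors(2, True)
```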

Notice here that what IBM means by virtualization is exactly the opposite of what Sun means, and that this marks an absolutely basic distinction between the ideas embedded in the two architectures.

When IBM says "virtualization" it means the 1960s idea of breaking a multi-million dollar processor complex into independently managed chunks. In contrast, the Unix idea of virtualization focuses on processes, making them manageable independently of the hardware.

Thus virtualization in the IBM sense increases complexity while Unix virtualization reduces it - and IBM's approach lets you break one physical machine into multiple virtual ones while the Unix ideas embedded in Sun products like N1 allow you to treat many small machines as one larger one.

Cut to the bottom line and what you have is Sun pursuing the second generation Unix ideas embedded in Plan9: in which the network is the computer and all resources are accessible by everyone; while IBM is still selling pre-Unix VM ideas for protecting one application from another on the same hardware.

See: Part 3



Talkback

33 comments
  • and this is bad?

    IBM's approach is designed so that you can take all the applications, jobs, and users currently running on a lot of machines and put them on a single machine - one that is far easier to maintain than many smaller systems spread throughout a building or site.

    Having all resources accessible by everyone is not necessarily a good thing. Think security. IBM's approach keeps the students out of the grades database down at State U.

    Labeling IBM's approach as old and Sun's approach as new does not mean Sun's approach is better. In fact, I think IBM's approach is superior.
    CattleProd
    • A majority would agree with you

      Most people think the whole divide-and-monitor business is great - but I don't, and most Unix people don't, because it's antithetical to the whole business of building communities and extending user capabilities and control.
      murph_z
      • Partially right

        There are cases where full virtualization makes sense (not zones/workload
        partitions), e.g. when different operating systems/releases are needed, or
        different kernel patch levels. The latter because of incompetent managers
        who are afraid of too many dependencies.

        A former manager of mine did choose IBM's LPAR technology over Solaris' Zones
        for a consolidation project. This was the last nail in the coffin, so I left the
        company.

        If effective usage of resources is wanted, LPARs are not the right solution.
        IBM won't tell you (I asked about it!), but every additional LPAR adds a
        performance penalty. This comes from the context-switching overhead.

        Sun's LDOMs are much more efficient as only full HW threads are assigned to
        LDOMs.

        Live migration of LPARs is one feature where IBM is ahead, but Sun will deliver this
        feature next year for LDOMs.
        Burana
        • LPAR overhead

          I was in a similar situation. However, as angry as I was when we brought IBM in, I'm glad I stayed. Having Solaris and AIX experience will only increase your marketability. Neither one is going away anytime soon.

          The context-switching overhead is very low on LPARs, and (I believe) non-existent if you are using dedicated CPU resources, i.e. CPU not allocated via a shared pool. I just sat through a whole day on optimizing System p to minimize overhead. AIX is also aware of unused VPROCs and uses a 'folding' technique to force thread affinity onto as few real processors as possible. This concentrates dispatch on as few processors as possible across LPARs, minimizing context switching.
          civikminded
          • Dedicated CPU resources

            If I want dedicated CPU resources I can as well switch to Sun's physical domains, where I have electrical separation.

            The point is, with LDOMs (micropartitions) you will lose a lot of the performance that you paid for.

            I really have enough know-how in AIX (hell, I worked for IBM!) to decide for _me_ that specialising on Solaris is more fun.
            Burana
          • Well You might...

            Have worked for IBM, but you sure don't understand micropartitioning - but
            then again, most IBM sales people I've met don't either, and it's their job
            to sell it :)=
            It's a bit complex, but also extremely powerful.

            LDOMs are more like Dynamic LPAR (DLPAR): you can remove/add
            logical cores to a partition/domain, but a logical core is a
            fixed entity. A logical core is what comes from the coarse-grained
            multithreading used on Niagara; hence you'll see 32/64 logical
            cores on a T1/T2. So if you want more or less computational
            power, you add or remove a logical core.

            In micropartitioning (SPLPAR is the better name) the layers are
            called physical core, virtual core, and logical core.

            What you do is assign a suitable number of virtual cores to a
            partition, and then assign a number of tenths of physical cores
            to serve those virtual cores.
            Hence if I make a partition with 6 virtual cores and assign 3.6
            physical cores to it, I'll have a partition with 6 cores where
            each core corresponds to 0.6 of a physical core. If I only run
            a workload that can saturate 3 cores, each of those cores will
            be able to use a whole physical core. The leftover 0.6 is free
            to be used by partitions that run in uncapped mode.

            If I then choose to turn on SMT (Simultaneous Multithreading) on
            my partition, I'll see 12 logical cores, and will also harvest
            the benefit of SMT.

            When running in uncapped mode, a partition is able to grab
            unused processing power, so that each virtual processor can
            grow to a whole physical processor.
            Hence the partition from the example can get up to 6 whole
            physical cores of processing power, if it wants it and nobody
            else is using that capacity.
            So the whole trick is to fit partitions with workload patterns
            as different as possible onto one physical machine.

            So if you have:
            4 x 4 cores boxes that run online from 8 am - 6 pm at 50% util.
            4 x 4 core boxes that run batch 1 from 3 pm - 9 pm at 80% util
            8 x 2 core boxes that run batch 2 from 6 pm - 4 am at 50% util.
            2 x 4 core boxes that run backup from 6 am - 8 am at 60% util.

            (56 cores put together)
            With SPLPAR, you can actually run all of this on two
            11-core boxes.

            My own first reaction on reading about micropartitioning/SPLPAR
            (Shared Pool LPAR) was that the catch must be the overhead.
            But I quickly realised that, because you can use unused
            processing resources from all the other partitions running on
            the same machine, the overhead is effectively negative.

            // Jesper
            JesperFrimann
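
The SPLPAR arithmetic described in the comment above can be modeled in a few lines. A rough sketch - the function names are mine, not IBM's:

```python
# Rough model of the SPLPAR example above: 6 virtual cores backed by
# 3.6 entitled physical cores. Names are illustrative, not IBM's.

def per_core_entitlement(entitled_cores: float, virtual_cores: int) -> float:
    """Guaranteed fraction of a physical core behind each virtual core."""
    return entitled_cores / virtual_cores

def max_capacity(virtual_cores: int, pool_cores: int,
                 uncapped: bool, entitled_cores: float) -> float:
    """A capped partition stops at its entitlement; an uncapped one can
    grow until every virtual core is backed by a full physical core,
    or the shared pool runs out - whichever comes first."""
    if not uncapped:
        return entitled_cores
    return min(virtual_cores, pool_cores)

# Each of the 6 virtual cores is guaranteed 0.6 of a physical core:
guaranteed = per_core_entitlement(3.6, 6)
# Uncapped, with idle capacity in an 11-core pool, it can reach 6 cores:
peak = max_capacity(6, 11, True, 3.6)
# With SMT on, the OS in this partition sees 6 * 2 logical processors.
```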
          • Utilization

            I know how micro-partitioning works...

            IBM wants to sell you this box as a consolidation platform (many physical boxes onto one).

            More non-idle partitions (better utilization!) means more context switches, which means less bang for the buck.

            Compared to Zones/LDOMs (almost no overhead!) this means you get more value out of Sun HW/Solaris, with less complexity (your words, not mine).
            Burana
  • At it again...

    [i]
    "Although AIX Version 6.0 is either into, or just about ready for, its beta release,
    5L V5.3 is the current production toolset - and changes like those shown for iostats,
    tar, more, dbx, and many other common tools hardly play catch up ball to SuSe 7.1,
    Solaris 2.5.1, or even HP-UX 11."
    [/i]
    So you list some tweaks that you have selected from a long list
    of changes, and then conclude that AIX 5.3 is not even playing
    catch-up to other Unixes and Linux?

    I don't see any features or functions compared, so how can you conclude that?

    [i]
    "if you've ever struggled with a database engine, like Sybase 11,
    requiring manual extent assignment 5.3's lifting of the 1GB partition limit will provide magical relief"
    [/i]

    The Sybase 11 limit you talk about is not one that I've ever
    heard of, or have been able to get any hits on via Google or IBM's
    technical help database for AIX.
    Sounds like a 32-bit vs 64-bit kernel issue. On AIX 5.2 you
    could run either a 32-bit or a 64-bit kernel, where the 32-bit one
    had some limits on the number of segments for a process.
    Basically, all you had to do was boot the 64-bit kernel to
    get around that limit.

    The main change from 5.2 to 5.3, IMHO, was the whole micropartitioning
    and SMT support, which was a huge step and a totally different
    way of thinking for us Unix people.

    [i]
    "Notice here that what IBM means by virtualization is exactly the opposite
    of what Sun means and that this therefore presents an absolutely
    basic distinction between the ideas embedded in the two architectures."
    [/i]

    WRONG - if you actually followed where Sun is going, they are
    talking about a hypervisor and LDoms not only for Niagara but also
    for Rock.
    Try, for example, this story:
    http://www.itjungle.com/tug/tug101906-story03.html

    And in AIX Version 6 you'll have workload partitions, which will
    be AIX's version of Solaris Containers.

    So basically the Unix leaders (IMHO, IBM and Sun) are going in
    the same direction: Sun is doing virtualization and IBM is doing
    containers. You just haven't discovered it.

    Actually, I think we Unix people need to give the mainframe some
    credit with regard to virtualization. Statements like these
    come to mind:

    "Sorry, we didn't know better."
    "Yes, you were right all along."
    "Thank you for letting us use your technology."
    "And please can we borrow one of your oldtimers to teach our
    young Linux guys some things?"

    And it's hard for a hardcore Unix oldtimer like myself, who
    has been battling mainframes for 15 years, to admit that I didn't
    know how good virtualization on the mainframe was ;)

    But back to your post.

    [i]
    When IBM says "virtualization" it means the 1960s idea of breaking a
    multi-million dollar processor complex into independently managed
    chunks. In contrast the Unix idea of virtualization focuses on processes
    to make them manageable independently of the hardware.
    [/i]

    You are so wrong, and you cannot even see it. A processor on
    a virtualized POWER5/6 machine is a virtual processor, independent
    of the hardware. You can resize it, add more of them, etc.,
    independently of the hardware. The same goes for memory and
    disks, as it has been in all Unixes for years.
    Furthermore, things like workload partitions - more or less the
    same as Solaris Containers - let you move workloads across
    physical machines. And on POWER6 you'll have partition migration,
    where you can move partitions across physical machines, like
    VMotion on VMware.
    Furthermore, even the cheapest pSeries box can be partitioned,
    for a license that is cheaper than VMware.

    [i]
    "Thus virtualization in the IBM sense increases complexity while
    Unix virtualization reduces it - and IBM's approach lets you break
    one physical machine into multiple virtual ones while the Unix ideas
    embedded in Sun products like N1 allow you to treat many small
    machines as one larger one.
    [/i]

    Nahh - partitioning a server does not make things more complex.
    You don't get more OS instances than you had before by partitioning.
    There is a balance between partitioning a machine and
    consolidating workload into one partition, where you might want
    to impose some varying degree of separation. That you can do
    by using either a 'Solaris Container-like' function or a
    'workload manager-like' functionality.

    Both Solaris on sparc and AIX on power have rich functionality
    to support this.
    Also, by virtualizing you get the ability to overbook your
    hardware like airlines do planes. So rather than having three 4-way
    separate physical machines - one doing batch, one doing
    database and one doing application - you might be able to fit it
    all into a single four-way server, if the workloads don't peak at
    the same time. Now that's why VMware and the POWER hypervisor
    save people a shitload of money on hardware.

    [i]
    Cut to the bottom line and what you have is Sun pursuing the second
    generation Unix ideas embedded in Plan9: in which the network is the
    computer and all resources are accessible by everyone; while IBM
    is still selling pre-Unix VM ideas for protecting one application from
    another on the same hardware.
    [/i]
    Nahh - Solaris is going the same way as AIX, you just haven't
    seen it. AIX is also making all resources accessible across
    machines with technology like workload partitions.
    And if you had actually used VM back then and AIX on a pSeries
    today, you would know that there is a big difference.

    So, to sum up:

    You totally fail to compare Solaris to AIX. All you do is
    list some cherry-picked changes from AIX 5.2 to 5.3 and then
    make fun of them. Then you try to claim that AIX is
    moving in the totally opposite direction from the other Unixes
    when it comes to virtualization, when it's actually in front of
    the others, thanks to IBM reusing mainframe knowledge and
    developers - and both Sun and HP are trying eagerly to catch up.

    This blog entry is even worse than your last; it doesn't bode
    well for the next one...

    // Jesper
    JesperFrimann
    • umm, dead wrong on everything, I think

      Yes, Solaris/SPARC does support partitioning - has for years. But LDoms are ldums - put in to sell to people who think this stuff is critical, and not conceptually part of Unix.

      Containers are - they're an upgrade on the users/groups business.

      Basically one unites, the other divides.
      murph_z
      • Please explain

        If he is dead wrong, please explain why, instead of making inane statements like "Containers are - they're an upgrade on the users/groups business" and "Basically one unites, the other divides."

        It's becoming increasingly clear that you have no real intention of exploring deep-dive technical issues or any real comparison between Solaris and AIX, which is a real shame. ZDNet feels the need to throw the Unix folks a flamebait blogger in lieu of any real discussion of Unix issues. I have made a career of administering both Solaris and AIX, and they are both world-class platforms.

        Why don't you rename your posts to reflect your chosen path? At least be intellectually honest about what you are doing.
        civikminded
        • Murph is a Sun zealot.........

          You have to read Murph's blogs with a grain of salt. Murph is a Sun zealot and sees the world through "sunny" glasses.

          I have used both IBM and Sun workstations, and personally, I find AIX much more productive in most cases. And apparently, so do a LOT of others too.
          linux for me
        • uhuh

          Volume management on AIX is world class? Which world, Ceres?
          murph_z
          • Management

            I've used both platforms, and management, in general, is more difficult on Solaris. There are things in AIX that can be done with a few keystrokes in SMITTY - or even better with the new console (try the AIX 6 beta) - that are insanely annoying on Solaris. I'm a longtime Unix user, and just because I know how to modify files and navigate scripts doesn't mean that I want to do all that just to change my IP address. I also find Solaris' device naming structure (especially for network devices) really annoying. Just because I know how to play with things under the covers doesn't mean that I want to do it for (what should be) simple tasks.
            unxguy
      • How about some respect Murph?

        Most of the posters read your rants and then put some work into commenting on them (not me of course). How about you address the arguments rather than "dead wrong on everything I think". All this is showing is that you really don't have the technical knowledge in this area to do any sort of comparison.

        But then lack of knowledge has never stopped you before ;-)
        tonymcs1
      • Hmm..

        Sorry, you aren't being serious. And it sounds
        like you haven't really understood virtualization.

        // Jesper
        JesperFrimann
      • Virtualization

        Your obvious bias makes it difficult to take your observations seriously. You downplay the benefits of virtual machines, while HP and Sun have made attempts to catch IBM in this area (see Sun's announcements about LDOMs and xVM, and HP's about Integrity Virtual Machines and pursuing a type-1 hypervisor). I guess VMware, Microsoft, the Linux community with Xen and KVM, and IBM all have this approach wrong, huh? Somebody should tell VMware - they should know that their market cap of > $40 billion and nearly 90% year-to-year revenue increases are based on technology that nobody really needs.

        There is a place for software partitioning and it's a very useful technology, but high-end workloads like financial databases are not the place to rely on a single kernel not crashing under multiple environments. And you might want to look into it: Containers don't provide ALL of the features of a standalone Solaris installation. There are limitations. I will grant you, though, that the branded zones technology looks very attractive to some existing users of older versions of Solaris.

        "Basically one unites, the other divides."

        So, consolidating 10-12 physical hardware systems down to one is dividing them? I'm not sure I see that. Besides, AIX 6 is scheduled to come out next month with WPARs (similar to containers, except with some upgrades like WPAR mobility and a better management interface), so the point will be moot. AIX/POWER will offer users the choice of software partitioning or virtual machines.

        I don't understand the comment about being "conceptually part of Unix". I'd wager that most customers couldn't care less about how conceptually faithful the operating systems are to what have become arcane concepts anyway; they would probably use a hamster in a wheel to run their business if it provided the desired reliability and performance at acceptable cost.
        unxguy
  • 1GB PP limit

    Please explain this 1 GB physical partition limit in greater detail, along with its implications. You do know LVM terminology, right? You DO know what 'PP' means in AIX LVM, right?

    (HINT: the 1 GB limit has nothing to do with logical volume size, but I'm trying to get you to do a little teensy bit of research)
    civikminded
    • Ever try running raw devices on AIX before 5.3?

      I have, and it wasn't fun.
      murph_z
      • Yes...

        Unless you are talking ancient AIX 4.3, it's more
        or less equally hard on all the 5Ls.
        And come to think of it, it's actually hard on
        any OS.

        // Jesper
        JesperFrimann
  • OK

    What does Sun have for volume management? You have to BUY VxVM because Solstice is unusable. And no, IMO ZFS is not ready for enterprise adoption.
    civikminded