I recently got my home workstation environment nicely aligned with a three-way split between Windows 7, Mac OS X and Linpus Linux. Quite apart from sharing processing load and being able to segment home and work life more effectively, it allows me to try out more web pages on more browsers and (if I am honest) looks pretty cool too.
Of course, there are solid code development reasons for wanting a multi-screen setup. For example, would you find it useful to have one screen tracking a software change and configuration management (SCM or SCCM) tool and another showing your as-it-happens development environment?
Jeff Atwood, author of the Coding Horror blog, wrote some time ago that, "Multiple monitors [are] most useful when an application has palettes or when two or three windows need to be open, such as for programming/debugging."
One might logically suppose that multiple screens would be fairly mandatory in extreme programming environments, so just how many do you need? Developers working on multifaceted GUI-related projects may even want to break up the total user interface and view its parts individually if it is large, complex and/or intensely rich.
Of course, some games developers will need multiple screens from the outset, as they may be programming for gameplay options in which additional screens support extra camera views and/or a partial 'surround view' using three or even four screens arranged in a row.
Going back to my reference to change management and the use of multiple screens, SCM vendor PureCM points out that if you're writing code or fixing bugs in, say, Visual Studio, then you'll want to view your SCM tool separately to reduce the possibility of introducing errors caused by switching back and forth between the two.
PureCM’s technical director Mike Shepherd is on the record with the following statement, “The roots of using two screens go back a few years. Looking specifically at the Visual Studio environment, Microsoft provided an SCM interface to give access to source control. While this provided a common interface to many external programs and tools, its use was limited in scope. The SCM interface route doesn’t sit easily with more modern development concepts like task-based development and refactoring, making the two-screen approach an ergonomic compromise with the potential to introduce issues.”
OK, so this might sound like the most obvious subject in the world to talk about, as so many of us are using more than one screen now, but there's a technical justification to be made here and sound reasoning to back it up, isn't there?
The argument that it ought to be easier to work with your main development tool and have your SCM tool seamlessly integrated with it must surely hold water? This way you can perform tasks such as managing changesets, merging changes and getting line-by-line histories to find bugs directly within Visual Studio itself.
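To make the line-by-line history idea concrete: whatever SCM product you use, the underlying concept is the same annotation that git exposes as `git blame`. This is a minimal sketch using git purely as a widely available stand-in, not as a claim about PureCM's own tooling; the repository, file name and commit messages are all invented for illustration.

```shell
#!/bin/sh
# Sketch: line-by-line history with git blame, in a throwaway repo.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "dev@example.com"   # hypothetical identity
git config user.name "Dev"

# First revision of a file
printf 'first line\n' > app.c
git add app.c
git commit -qm "initial version"

# Second revision adds a line
printf 'first line\nsecond line\n' > app.c
git commit -qam "add second line"

# Each line is annotated with the commit that last touched it,
# which is how you trace a bug back to the change that introduced it.
git blame app.c
```

An integrated SCM plugin simply surfaces the same annotation inline in the editor, so you never leave Visual Studio to see which change last touched a suspect line.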
Right, that's it for today; now to plan the integration of my Xbox 360 Live somewhere into my blossoming hedgerow of electronic windows.