Day 2: Eclipse on Mars

NASA scientists and mission planners are using the Eclipse RCP to control current and future Mars missions.
Written by Ed Burnette, Contributor

(Jeff Norris et al. gave a presentation in the Theater on NASA Mission Operations with the Eclipse Rich Client Platform. -Ed)

The Eclipse RCP is like a launch vehicle. It provides you with a component architecture, an extensible GUI, and common application capabilities. Our expertise is in spacecraft, not in software packages. We want to thank the entire Eclipse community, because we've found this to be a great asset to our work.

Technologies to look for: Hibernate, JMS, SQL databases, SWT, JAI, Eclipse Forms, GEF, Draw2D, JEP, Castor/XML.

(Mark Powell took the stage:) I'm a mission scientist for Spirit and Opportunity. The rovers send back hundreds of images every day. When you come in, you have to sift through all of this, determine what the rovers are doing and where they are, and plan where to go next. The search capabilities are really important because of the value of the data, and they're built with Eclipse Forms.
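(For readers who haven't used Eclipse Forms, here's a minimal sketch of how a search pane like the one described might be assembled with a FormToolkit. The fields are invented for illustration; they aren't taken from the actual NASA tool. -Ed)

```java
import org.eclipse.swt.SWT;
import org.eclipse.swt.layout.GridData;
import org.eclipse.swt.layout.GridLayout;
import org.eclipse.swt.widgets.Composite;
import org.eclipse.swt.widgets.Text;
import org.eclipse.ui.forms.widgets.FormToolkit;
import org.eclipse.ui.forms.widgets.ScrolledForm;

// Hypothetical sketch of a Forms-based search pane; the field names
// (sol, camera) are placeholders, not the real application's schema.
public class ImageSearchForm {
    public static ScrolledForm create(Composite parent) {
        FormToolkit toolkit = new FormToolkit(parent.getDisplay());
        ScrolledForm form = toolkit.createScrolledForm(parent);
        form.setText("Image Search");
        Composite body = form.getBody();
        body.setLayout(new GridLayout(2, false));

        toolkit.createLabel(body, "Sol:");
        Text sol = toolkit.createText(body, "");
        sol.setLayoutData(new GridData(GridData.FILL_HORIZONTAL));

        toolkit.createLabel(body, "Camera:");
        Text camera = toolkit.createText(body, "");
        camera.setLayoutData(new GridData(GridData.FILL_HORIZONTAL));

        toolkit.createButton(body, "Search", SWT.PUSH);
        return form;
    }
}
```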

Accessing results is done through a custom GEF table, because the stock table widget was insufficient. The Results view provides thumbnails, allows search narrowing, and paginates results.

Behind the scenes, it uses MySQL 5 (HSQLDB and Postgres for certain other deployments). Using Hibernate, we can just work with objects, which gives us database-agnostic access.
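(A minimal sketch of what that looks like in Hibernate 3. The entity and its fields are invented for illustration; the talk didn't describe the real schema. -Ed)

```java
import java.util.List;
import org.hibernate.Session;
import org.hibernate.SessionFactory;

// Hypothetical example: query mapped objects with HQL. Because the query
// targets the objects, not the SQL dialect, the same code runs unchanged
// on MySQL, HSQLDB, or Postgres.
public class ImageQuery {
    public static List findBySol(SessionFactory factory, int sol) {
        Session session = factory.openSession();
        try {
            return session.createQuery("from RoverImage where sol = :sol")
                          .setInteger("sol", sol)
                          .list();
        } finally {
            session.close();
        }
    }
}
```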

(Next they gave a live demo of the Results view showing real data from Spirit collected yesterday.)

Now that I've found my data, I need to manipulate it using SWT imaging. We use Image, ImageData, PaletteData, and ImageLoader. We scale, rotate, and translate using the GC, and use alpha transparency.
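(Here's a minimal sketch of that technique: drawing a source image into a target image through a GC with a transform and alpha applied. The rotation-about-center and the specific alpha value are my own choices for illustration. -Ed)

```java
import org.eclipse.swt.graphics.*;
import org.eclipse.swt.widgets.Display;

// Sketch of SWT-side manipulation: scale/rotate via GC.setTransform,
// with partial transparency via GC.setAlpha.
public class SwtImageOps {
    public static Image scaleAndRotate(Display display, Image source,
                                       float scale, float degrees) {
        Rectangle b = source.getBounds();
        Image result = new Image(display, b.width, b.height);
        GC gc = new GC(result);
        Transform t = new Transform(display);
        t.translate(b.width / 2f, b.height / 2f);  // rotate about the center
        t.rotate(degrees);
        t.scale(scale, scale);
        t.translate(-b.width / 2f, -b.height / 2f);
        gc.setTransform(t);
        gc.setAlpha(200);                          // partial transparency
        gc.drawImage(source, 0, 0);
        gc.setTransform(null);
        t.dispose();
        gc.dispose();
        return result;
    }
}
```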

For heavier-duty processing we use AWT imaging: BufferedImage, BufferedImageOp, ConvolveOp (transform pixel values using a local window and weights), and LookupOp (replace pixel values from a table).
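(Both operations are a few lines each. A sketch, with a 3x3 box blur and a value-inverting lookup table chosen just as examples. -Ed)

```java
import java.awt.image.*;

// The two AWT operations mentioned above: a ConvolveOp applying a simple
// 3x3 blur kernel, and a LookupOp replacing each pixel value from a table.
public class AwtImageOps {
    public static BufferedImage blur(BufferedImage src) {
        float ninth = 1f / 9f;
        float[] weights = { ninth, ninth, ninth,
                            ninth, ninth, ninth,
                            ninth, ninth, ninth };
        ConvolveOp op = new ConvolveOp(new Kernel(3, 3, weights),
                                       ConvolveOp.EDGE_NO_OP, null);
        return op.filter(src, null);
    }

    public static BufferedImage invert(BufferedImage src) {
        byte[] table = new byte[256];
        for (int i = 0; i < 256; i++) {
            table[i] = (byte) (255 - i);  // new value looked up from the table
        }
        LookupOp op = new LookupOp(new ByteLookupTable(0, table), null);
        return op.filter(src, null);
    }
}
```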

If you want to get fancier, you need some heavier-duty support. That's what the Java Advanced Imaging (JAI) library can give you. You can defer execution of image operations and only compute the parts of the image that are needed for display. It's very scalable: you can take arbitrarily large images, break them up, and work on a piece at a time. It also has a very extensible I/O library that handles a lot of things like metadata (position, orientation, time, etc.).
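(The deferred-execution idea in miniature: building a JAI operation chain costs almost nothing, and pixels are only computed when a tile is actually pulled. The file name is a placeholder. -Ed)

```java
import java.awt.image.Raster;
import java.awt.image.renderable.ParameterBlock;
import javax.media.jai.Interpolation;
import javax.media.jai.JAI;
import javax.media.jai.RenderedOp;

// Sketch of JAI deferred execution: nothing is decoded or scaled until
// a tile of the result is requested.
public class JaiDeferredDemo {
    public static void main(String[] args) {
        // Describe the load lazily; no pixels are read yet.
        RenderedOp image = JAI.create("fileload", "pancam_left.png"); // placeholder path

        ParameterBlock pb = new ParameterBlock();
        pb.addSource(image);
        pb.add(0.5f);  // x scale
        pb.add(0.5f);  // y scale
        pb.add(0f);    // x translation
        pb.add(0f);    // y translation
        pb.add(Interpolation.getInstance(Interpolation.INTERP_NEAREST));
        RenderedOp scaled = JAI.create("scale", pb);

        // Only now is one tile of the scaled image actually computed.
        Raster tile = scaled.getTile(0, 0);
        System.out.println("Computed tile: "
            + tile.getWidth() + "x" + tile.getHeight());
    }
}
```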

For anybody who wants to write something like Photoshop in Eclipse RCP (which is essentially what we're doing), here are some tips. There are lots of snippets about converting AWT images to SWT, but you don't have to do that; you can wrap the pixel arrays to reuse the data and save memory. 8-bit grayscale images are no problem: use an IndexColorModel with 256 levels of gray. If it's 24-bit, define pixels in Blue-Green-Red (BGR) order, which is what SWT expects. You can use JAI to read an image in any supported format and reorder its color bands if it's Red-Green-Blue (RGB). Reuse the raster from an AWT image.
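(A sketch of the zero-copy grayscale tip, under the assumption that the pixel array is width*height bytes: both toolkits wrap the same byte array, so nothing is converted or duplicated. -Ed)

```java
import java.awt.Transparency;
import java.awt.color.ColorSpace;
import java.awt.image.*;
import org.eclipse.swt.graphics.ImageData;
import org.eclipse.swt.graphics.PaletteData;
import org.eclipse.swt.graphics.RGB;

// Share one 8-bit grayscale pixel array between SWT and AWT. Edits made
// through either side are visible to both, since the array is never copied.
public class SharedGrayscale {
    public static void wrap(byte[] pixels, int width, int height) {
        // SWT side: an indexed palette with 256 levels of gray.
        RGB[] grays = new RGB[256];
        for (int i = 0; i < 256; i++) {
            grays[i] = new RGB(i, i, i);
        }
        ImageData swtData =
            new ImageData(width, height, 8, new PaletteData(grays), 1, pixels);

        // AWT side: a raster over the same array instead of a copy.
        DataBufferByte buffer = new DataBufferByte(pixels, pixels.length);
        WritableRaster raster = Raster.createInterleavedRaster(
            buffer, width, height, width, 1, new int[] { 0 }, null);
        ColorModel gray = new ComponentColorModel(
            ColorSpace.getInstance(ColorSpace.CS_GRAY),
            false, false, Transparency.OPAQUE, DataBuffer.TYPE_BYTE);
        BufferedImage awtImage = new BufferedImage(gray, raster, false, null);
        // ... hand swtData to SWT and awtImage to AWT imaging code.
    }
}
```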

(Live demo of a contrast stretch (seeing things in shadow) and a reachability map (using 3D stereo imaging to show things you can reach in green). Another demo of a real-time stitched multi-image composite.)

SWT 3.2 adds OpenGL support for 3D graphics, usable with bindings like JOGL and LWJGL. Folks up at Ames have been busy using this in their Viz product, a 3D planetary exploration and browsing tool. (Viz was demoed, showing a 3D fly-through view of the surface of Mars.)

(Ken Rabe took the stage next:) We use a lot of GEF: for example, tools for measurement and navigation, rulers for compass headings, and layers for targets and other annotations.
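(The layering idea can be sketched with Draw2D, the drawing library underneath GEF: imagery on one layer, annotations on a transparent layer above it. The figures here are stand-ins, not the real application's. -Ed)

```java
import org.eclipse.draw2d.*;
import org.eclipse.draw2d.geometry.Rectangle;
import org.eclipse.swt.widgets.Canvas;

// Hypothetical sketch: a LayeredPane with an image layer below and a
// transparent annotation layer (targets, measurements) above.
public class TargetLayers {
    public static void install(Canvas canvas, IFigure imageFigure) {
        LayeredPane pane = new LayeredPane();

        Layer imageLayer = new Layer();
        imageLayer.add(imageFigure);           // caller sets the figure bounds

        Layer annotationLayer = new Layer();
        annotationLayer.setOpaque(false);      // lets the image show through
        Ellipse marker = new Ellipse();        // stand-in for a target marker
        marker.setBounds(new Rectangle(50, 50, 12, 12));
        annotationLayer.add(marker);

        pane.add(imageLayer, "image");         // keys identify the layers
        pane.add(annotationLayer, "targets");
        new LightweightSystem(canvas).setContents(pane);
    }
}
```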

One of the big advantages of the OSS community is that all the source is available. For example, if the ruler supports inches/pixels/cm but you need azimuth/elevation, you can take your own copy and adapt it yourself.

One of the big things we're working on is distributed operations. We've got scientists not just at JPL but throughout the world. They need all the data themselves to be able to do all the science. This isn't easy when it's all housed at JPL behind several firewalls. In the new software, all users are treated the same: everyone is a distributed user. (I saw Scott Lewis of ECF buttonholing them after the talk. -Ed)

To create a target, the user clicks on a point. We make a call to a servlet, which returns an x,y,z location. This helps reduce the amount of data sent to the client. When they decide they really want to do something with it, they assign a name to the location, visible to everyone else. There's a time crunch every day to get everything done. When a new client starts up, it gets all the targets everybody has already defined. When a new target is created, it's saved in the SQL database, and a JMS notification is sent out to sync everybody up.
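(The JMS sync step might look roughly like this; the topic name and message fields are assumptions, since the talk didn't show the wire format. -Ed)

```java
import javax.jms.*;

// Hypothetical sketch: publish a "target created" notification so that
// every connected client can update its local view.
public class TargetNotifier {
    private final TopicConnectionFactory factory; // obtained via JNDI in practice

    public TargetNotifier(TopicConnectionFactory factory) {
        this.factory = factory;
    }

    public void publishTargetCreated(String name, double x, double y, double z)
            throws JMSException {
        TopicConnection conn = factory.createTopicConnection();
        try {
            TopicSession session =
                conn.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("targets.created"); // assumed topic
            TopicPublisher publisher = session.createPublisher(topic);
            MapMessage msg = session.createMapMessage();
            msg.setString("name", name);
            msg.setDouble("x", x);
            msg.setDouble("y", y);
            msg.setDouble("z", z);
            publisher.publish(msg);
        } finally {
            conn.close();
        }
    }
}
```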

Plans are defined to tell the rover what to do next. They're stored in a database via Hibernate; the UI uses GEF and Eclipse Forms. In the daily plan, the rover wakes up, gets its bearings, does some work, takes a nap (to charge batteries), does more work, and then goes to sleep for the night. In Eclipse, this shows up in a multi-page editor and a Details view.

Behind the scenes, as people edit this plan and save their changes to the database, everyone finds out through JMS. (Jeff then gave a demo of the Plan editor.) The editor keeps track of the energy budget and the bandwidth budget.

(Mike McCurdy from Ames went next.) We do the scheduling and validation tools. Once the science team decides what they want to do, they need to schedule the activities. We want to do as much as we can without breaking the spacecraft by, say, draining the battery, filling up the flash memory, or driving with the arm extended. This is a critical and complex problem. Although the rover missions made many advances, future missions are even more ambitious; more missions are planned in 2007 and 2009. The current Eclipse-based solution is well positioned to be at least an order of magnitude faster than previous tools.

The planning editor has a Merge part where the basic plans are entered, and a timeline part (an extensible GEF implementation with rich interactive feedback). It shows an animated timeline that flags any problems, and users can directly manipulate activities on the timeline. Edits are automatically sent via XML-RPC to a remote service called Europa (a C++ binary). Europa evaluates the network of constraints and checks against a software model for flight rule violations; the client then displays the feedback in a human-readable format. This cycle continues until all violations are cleared in the Science Planning Interface (SPIFe) client.
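(The client side of that round trip could be as small as this, using Apache XML-RPC for illustration; the endpoint, method name, and payload shape are all assumptions, since the talk didn't specify them. -Ed)

```java
import java.net.URL;
import org.apache.xmlrpc.client.XmlRpcClient;
import org.apache.xmlrpc.client.XmlRpcClientConfigImpl;

// Hypothetical sketch of the edit/validate cycle: send the edited plan,
// get back a description of any flight-rule violations.
public class EuropaClient {
    public static Object validate(String planXml) throws Exception {
        XmlRpcClientConfigImpl config = new XmlRpcClientConfigImpl();
        config.setServerURL(new URL("http://localhost:8080/europa")); // assumed endpoint
        XmlRpcClient client = new XmlRpcClient();
        client.setConfig(config);
        return client.execute("plan.validate", new Object[] { planXml }); // hypothetical method
    }
}
```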

I'm a member of the Human-Computer Interaction group at Ames. We learned many important things in this project. For example, a lot of time is spent on seemingly trivial tasks: in the former incarnation of the planning tools, asking a simple question like what the gap is between two activities was a fairly arduous process. In the new tool we use a "smart cursor" and a status bar showing the exact values for gaps. The second lesson was that the automated planner behind the scenes was designed to force a consistent state at all times, so the user could never move things into an invalid arrangement. This got in the way, so we changed to a passive enforcement mode that simply flags violations and makes suggestions that users can choose to ignore (temporarily).

(Mike started a demo of the planner.) We're looking at a real plan from Spirit. We can see the time, accurate to the second, at the top. On the bottom there's a constraint view: for example, one activity can't happen earlier than a certain time (say, because a lighting condition is needed) or before an engineering condition is met. You can select between passive mode and constrained mode; in constrained mode the tool won't let you move activities to invalid places. Dependent activities are dragged along with the event being moved. Note that we participate fully in the Eclipse RCP framework, including undo/redo, retargetable actions, and so forth.

(At this point the presenters ran out of time, but they had a crowd around them eagerly asking questions long after the talk was over). 
