None of this had been difficult for Mouse. These were exactly the sorts of evaluations and decisions that the robot made best. The problem was mud. Mouse didn’t know anything about mud, and that lack of knowledge was interfering with the robot’s ability to make a critical decision.
The next phase of the mission required Mouse to identify a loop-shaped fixture on the upper surface of Presumptive Target #1. Mouse had located a fixture near the designated area of the object. Judged by physical location, the primary criterion, the fixture was a high-confidence match once variations in spatial orientation had been corrected for. Presumptive Target #1 had approximately 12.5 degrees of y-axis rotation and 30.0 degrees of z-axis rotation, but — corrected for that — the candidate fixture had a location confidence factor of 99.8 percent.
The problem lay in the shape of the fixture. Mouse’s mission program queue contained a detailed digital model of the fixture the robot was programmed to identify. And the fixture at the specified location was not a good match for the model: only 41.2 percent. It was in the correct place and was roughly the correct size, but it was not the correct shape.
The fixture had a name. It was a lifting shackle. And, as Mouse had been advised, it was loop-shaped. Unfortunately, the lifting shackle was packed with mud, compliments of its encounter with the sticky sediments of the Aleutian Island slope during the accident that had put the submersible Nereus on the bottom of the ocean.
Had Mouse known about the mud, the robot could have completed its identification of the lifting shackle, and moved on to the next phase of its mission: locating the lifting cable that the Research Vessel Otis Barton had lowered into the ocean. The phase after that, connecting the Otis Barton’s cable to the Nereus’s lifting shackle, would not be difficult at all. But to get there, Mouse had to correctly identify the lifting shackle, hiding under several layers of Aleutian Island mud.
Mouse wasn’t aware of any of these things. It had not been told that the fixture had a name. It didn’t know that Presumptive Target #1 was the deep water submersible Nereus, or that the three human beings inside the submersible were either dead or dying. Mouse didn’t even know what a deep water submersible was. The robot only knew that it was at the correct geographic coordinates, hovering five meters away from an object that closely matched its search criteria, evaluating a candidate fixture that was not the correct physical shape.
The situational-response algorithms built into Mouse’s core programming decided to examine the puzzling fixture from another position. With measured surges from its maneuvering thrusters, the robot moved ten meters to the east and swung its nose a corresponding amount to the left, so that it faced the object from a different angle.
When the maneuver was complete, the candidate fixture was once again centered in the cone of light cast by Mouse’s sealed halogen lamps. Satisfied with its new position, the robot studied the illuminated fixture through a pair of high-resolution video cameras. The results were no more satisfactory. The fixture was still the wrong shape.
Once again, the situational-response algorithms did their work. The computer shut off the robot’s halogen lamps to minimize optical interference, and triggered its LIDAR scanner for a more detailed look at the improperly shaped object. Short for Light Imaging Detection And Ranging, LIDAR was similar to radar, except that it transmitted and received low-intensity laser light instead of microwaves. The LIDAR scanner mounted on the upper leading edge of the robot’s hull directed a burst of laser light toward the presumptive target. In the space of one second, the scanner emitted 400 pulses of high-frequency laser light in a clockwise reticulated-rosette scanning pattern, and recorded and evaluated the resulting reflections.
The wavelength of the laser was tuned to 495 nanometers, in the blue-green band of the optical spectrum, where light is least likely to be absorbed or scattered by water. The individual laser transmissions were timed so closely together that a human eye could not have distinguished them as discrete events. A human observer — had one been present — would have seen only a second of flickering blue-green light.
The LIDAR scanner completed its transmission sequence. The laser went dark, and the halogen lamps snapped back on to provide illumination for the video cameras as the robot processed and assembled images from the laser scan.
The detailed LIDAR images revealed nothing new. The candidate fixture was still the wrong shape.
The cognitive architecture that formed the core of Mouse’s operating program was designed to continue functioning in the event of one or more logical failures. In computer-speak, this concept was called fault tolerance, or graceful degradation. Had the graceful degradation software been correctly coded, Mouse would have been able to override the programming conflict and continue its mission. But there was a bug in the program code. When the graceful degradation loop was triggered by an error, it was supposed to activate a subroutine to record the nature of the mistake for future correction, and then bypass the error to continue functioning. Instead, the faulty program activated the emergency maintenance subroutine, erroneously informing the robot that it had sustained critical damage, and ordering it to return to the surface for repair. This was the software bug that Mouse’s programmer, Ann Roark, had been chasing, and it had not yet been corrected.
Faced with an unsolvable logical conflict — this fixture on the upper surface of Presumptive Target #1 must be the one specified, but this fixture cannot be the one specified — Mouse’s core program activated the graceful degradation routine. The faulty software responded by triggering the emergency maintenance subroutine.
The robot’s computer immediately noted the damage signal and prepared to abandon its mission and head for the coordinates it had been launched from.
Without Ann Roark’s middle-of-the-night tinkering, the rescue of Nereus would have ended there. But Ann, in a burst of desperate and bleary-eyed wisdom, had crafted a slight modification in Mouse’s program code. The patch didn’t fix the problem, because Ann Roark still hadn’t found the bug that was causing it. This was a different type of programming. This was a workaround.
In the lingo of programmers, a workaround is a temporary and usually imperfect way of forcing a computer to operate in spite of an uncorrected malfunction. A workaround does not repair a broken piece of program code; it merely tricks the computer into pretending that the problem doesn’t exist.
The workaround Ann Roark had patched into Mouse’s program had four simple elements: one conditional statement and three commands:
(1) IF [emergency_maintenance_routine = active]
(2) CANCEL [emergency_maintenance_routine]
(3) RESUME [normal_operation]
(4) INVERT [last_logical_conflict]
The first line of the patch triggered the workaround as soon as Mouse’s computer went into emergency maintenance mode. The second and third lines of the patch canceled the call for emergency maintenance mode, and ordered the robot to continue operating as if no error had been received. The last line of the patch did the important work; it inverted the results of the logical conflict that had caused the error in the first place.
Mouse had been stymied by the fact that the fixture it had located did not match the shape of the digital model stored in computer memory. The code patch inverted that logical state, changing “INCORRECT SHAPE” to “CORRECT SHAPE” in Mouse’s memory.
The logical conflict was resolved. Mouse’s computer determined that all conditions had now been met for this phase of the mission. The robot moved on to the next phase and began searching for the Otis Barton’s lifting cable.
Fifteen minutes later, three thousand feet above Mouse’s position and two miles to the south, a small triangular green icon appeared on the screen of Ann Roark’s laptop computer. Ann yawned so hard that her ears popped, and she thumbed the trackball, scrolling the computer’s pointer over the new symbol. A tight block of letters and numbers appeared next to the icon.
It took a few seconds for her tired eyes to focus on the tiny status report. She read it, and then she read it again. And then she screamed at the top of her lungs.
She jumped out of her chair and clasped her hands over her head like a prizefighter celebrating a victory by knockout. “Yes!” she shrieked. “Yes, damn it! YES!”
She turned around and locked eyes with the first of the Navy geeks who caught her attention. “Call the Otis Barton,” she said. “Tell them to start hauling in their cable!”
The Navy guy, whatever his name was, looked stunned. “Does this mean …”
“Yes!” Ann shouted again. “It means that mama’s little mouse is bringing home the cheese!”
The Research Vessel Otis Barton rode easily on the waves, the white paint of her hull and superstructure gleaming in the midday sun. Originally constructed as a Victorious-class acoustic surveillance ship for the United States Navy, the squat little vessel had been retired from military service and reconfigured for marine research by NOAA, the National Oceanic and Atmospheric Administration.