
Introduction to Reality Computing

The use of digital models, information, and workflows for product and project development is a well-established practice in many industries. Technologies for computer-aided design (CAD), computer-aided manufacturing (CAM), digital prototyping, and building information modeling (BIM) have already changed many design and delivery processes.

This use of digital information about the physical world is now laying the groundwork for another transformation: the integration of digital design environments and the physical world. Technologies to capture information about the physical world, manipulate and analyze that information digitally, and actualize the result back into the physical world are combining to enable new ways of working. These new technologies for “Reality Computing” are already improving workflows across industries that design, produce, or manage physical products or projects—from jet engines to highways to running shoes.

What is Reality Computing?

For anyone engaged in the design, delivery, or management of physical things, Reality Computing is a vision for how technologies are breaking down the barriers between the physical and digital worlds.

Since the advent of digital recordings and audio CDs, music enthusiasts have been copying music tracks, editing or reformatting them to suit their needs, and publishing the results for other people or other devices. Colloquially known as “rip, mix, and burn,” this practice completely upended the music industry.

Reality Computing is the same idea for data capture, creation, and delivery:

  • capturing reality in a digital form,
  • using that reality data to digitally create simulations, designs, and other information,
  • then delivering the results back to the real world.

Virtual model-based designs that feed construction, fabrication, and manufacturing processes are commonplace across today’s industries. But many model-based design workflows begin with a digital blank slate. Any design context—be it the wing of a jet or the environs of a new road—is then modeled using geometry to represent spatial information. For example, the “existing conditions” of a building renovation project are usually documented manually by modeling the existing building from archived drawings, supplemented with surveyed or scanned data for important measurements as needed. Anecdotally, this ratio of modeled geometry to captured reality data is roughly 90 percent to 10 percent. As a result, design teams spend hours, days, even months modeling current reality before they ever get to the design phase of a project.

New technologies are now enabling the direct capture of spatial information about the physical world for integration into design processes. Design context is moving from geometry (modeled representations of the physical world) to this captured reality data. Reality Computing helps design teams improve design accuracy and accommodate physical-world conditions through customized fabrication of a design that is shaped to fit precisely with real-world conditions and environments. Furthermore, there is a technology explosion occurring for how digital projects or products can be realized in the physical world, from 3D printing and machine-controlled grading to augmented reality devices.

Figure 1: Reality Computing enables the direct capture of spatial information about the real world, use of that data in design processes, and delivery back to the physical world. Reality Computing technologies were used to carry out the I-95 New Haven (Connecticut) Harbor Crossing Corridor Improvement Program, pictured here in a digital rendering of the proposed infrastructure improvements. Image courtesy of Parsons Brinckerhoff

Fundamentals of Reality Computing

Reality Computing is a high-level concept that integrates the digital and physical worlds, bringing together many products and technologies to digitally capture existing conditions, use that information to digitally create designs, simulations, and other new information, and then deliver a physical representation of the results.

Capture

The ways information can be captured digitally from the physical world are increasing every day. Capture technologies—from laser scanning and point surveying to photogrammetry and ground-penetrating radar—coupled with plummeting prices are making reality capture easier to use and more accessible. For example, Figure 2 below illustrates the development of 3D scanning, including the introduction of the first commercial 3D laser scanning systems for the AEC industry. Those early systems were complicated and very expensive, limiting their market penetration; today, laser scanning is a staple of infrastructure and land development projects. Intel’s announcement at the 2014 Consumer Electronics Show (CES) that it will start building RealSense 3D camera technology into its product lines is another example of how 3D scanning technology is becoming commonplace.

Figure 2: Timeline of 3D scanning development. The first commercial 3D laser scanning systems for the AEC industry were complicated and expensive, limiting their market penetration. Today, the value and cost of laser scanning has made its use commonplace.
Source: Client Guide to 3D Scanning and Data Capture, BIM Task Group, 2013

Consider when mobile phones first started to include cameras. Initially, the cameras were used as expected—to capture still photographs. Who would have imagined that today you can use that built-in camera technology to deposit checks, measure your heart rate, or translate a street sign in a foreign language? Similarly, as 3D scanning technologies (and the reality data they produce) become more available and more established, the derivative applications and integrations with consumer and commercial tools will follow—further broadening the reach of 3D scanning.

This captured reality data is generally represented as high-density point clouds, which are very different from the descriptive geometry that design software uses today. The use of captured point cloud data in a 3D design application usually requires some sort of pre-processing. Processing typically involves registering different scans within a common coordinate system and then georeferencing that point cloud to a project’s existing coordinate system. Moreover, the size of raw point cloud files can reach hundreds of gigabytes for large projects—rendering them almost impossible to work with in a modeling environment. Preprocessing software helps users visualize and work with these massive datasets.
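
As an illustration only, the following Python sketch uses the open-source Open3D library to register one scan to another and then georeference the combined cloud into a project coordinate system. The file names, voxel size, and coordinate offsets are hypothetical placeholders; real preprocessing pipelines add coarse alignment (often from survey targets), error reporting, and quality control.

    import numpy as np
    import open3d as o3d

    # Load the raw scans and thin them so the data fits in memory for interactive work.
    scan_a = o3d.io.read_point_cloud("scan_a.ply").voxel_down_sample(voxel_size=0.02)
    scan_b = o3d.io.read_point_cloud("scan_b.ply").voxel_down_sample(voxel_size=0.02)

    # Register scan_b to scan_a: point-to-point ICP refines an initial alignment
    # (identity here, i.e., the scans are assumed to be roughly pre-aligned).
    result = o3d.pipelines.registration.registration_icp(
        scan_b, scan_a, max_correspondence_distance=0.05, init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    scan_b.transform(result.transformation)

    # Georeference the merged cloud: a rigid transform from the scanner's local
    # frame to the project coordinate system, assumed known from surveyed control.
    local_to_project = np.eye(4)
    local_to_project[:3, 3] = [531200.0, 4173450.0, 12.8]  # hypothetical offsets (m)
    merged = scan_a + scan_b
    merged.transform(local_to_project)
    o3d.io.write_point_cloud("merged_georeferenced.ply", merged)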

Create

Once “reality” is captured and processed, the next step of Reality Computing is the ability to operate on the digital, real-world information. This may involve editing it to filter erroneous or unwanted data, manipulating it into new designs, adding new model information around it, analyzing it for new information, or using it to simulate real-world behavior or perform clash detection. Depending on the application, some teams may take the optional step of creating surface meshes or 3D solids from some or all of the scanned or surveyed data.

3D design software can be used to manipulate and directly interact with the reality-captured data. For example, a civil engineer can import 3D laser scans of a congested roadway intersection into road design software as a real-world reference for early planning efforts to redesign the intersection. A damaged bracket on a military aircraft in a combat zone can be scanned and the reality-captured data uploaded to a manufacturing facility across the world, where the data is imported into mechanical design software, the damaged portion is digitally repaired, a replacement part is manufactured, and the part is shipped back to repair the aircraft.

Figure 3: Reality Computing enables manufacturers to use metrology technology combined with feature recognition software to create 3D solid models for quality inspection, reverse engineering, and so forth.

In addition, enabling technologies for segmentation and feature recognition of reality data allow designers to interact with point clouds and high-density meshes in more intuitive, object-like ways. Manufacturers can use metrology technology—both laser scanning and contact-based coordinate measuring machines (CMM)—combined with feature recognition software to convert point cloud and contact-probe data of manufactured components into 3D solid models. These models can then be used for a variety of purposes such as quality inspection during the manufacturing process or for reverse engineering. Similarly, feature recognition software helps civil engineers manipulate point clouds of existing terrain or infrastructure as objects rather than collections of points. For example, specialized feature recognition software can automatically identify relevant features in point clouds, such as bridges, signs, and streetlights in scans of highway corridors.
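
To make the idea concrete, here is a minimal Python sketch using Open3D that separates a corridor scan into a road surface and candidate roadside features; the file name and thresholds are hypothetical, and commercial feature-recognition tools use far more sophisticated classifiers than this plane-plus-clustering rule of thumb.

    import numpy as np
    import open3d as o3d

    cloud = o3d.io.read_point_cloud("corridor.ply")

    # 1. Extract the dominant plane (the road surface) with RANSAC.
    plane_model, road_idx = cloud.segment_plane(distance_threshold=0.05,
                                                ransac_n=3, num_iterations=1000)
    road = cloud.select_by_index(road_idx)
    above_road = cloud.select_by_index(road_idx, invert=True)

    # 2. Cluster the remaining points; each dense cluster is a candidate feature
    #    (sign, streetlight, bridge pier, ...) for a reviewer to confirm and label.
    labels = np.array(above_road.cluster_dbscan(eps=0.5, min_points=50))
    for k in range(labels.max() + 1):
        candidate = above_road.select_by_index(np.where(labels == k)[0])
        print(f"candidate feature {k}: {len(candidate.points)} points, "
              f"bounds {candidate.get_axis_aligned_bounding_box()}")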

The ability to import, visualize, and edit reality-captured data can also help streamline ‘scan to BIM’ processes. Scanned data of a building can serve as a reference to create or validate a building model used as a starting point for the building’s renovation. Scans of a newly poured concrete slab can be imported into a 3D design model of a new building (that contains the digital design of the slab) to perform deviation analysis—highlighting high and low areas that need adjustment. Scanned point cloud data of an existing facility can be combined with digital models representing new equipment or renovated spaces for project coordination and clash detection.
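
As a simple illustration of the slab deviation check described above, the short Python sketch below compares scanned top-of-slab points against a design elevation and flags high and low spots; the file name, design elevation, and tolerance are assumed values, and real scan-to-BIM tools compare against the full design surface rather than a single plane.

    import numpy as np
    import open3d as o3d

    DESIGN_ELEVATION = 12.500  # design top-of-slab elevation in metres (assumed)
    TOLERANCE = 0.010          # +/- 10 mm flatness tolerance (assumed)

    # Scan is assumed to be registered to the project coordinate system already.
    points = np.asarray(o3d.io.read_point_cloud("slab_scan.ply").points)
    deviation = points[:, 2] - DESIGN_ELEVATION  # signed high/low per point

    high = points[deviation > TOLERANCE]   # areas to grind down
    low = points[deviation < -TOLERANCE]   # areas to fill or re-screed
    print(f"{len(high)} points high, {len(low)} points low; "
          f"range {deviation.min()*1000:.1f} mm to {deviation.max()*1000:.1f} mm")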

Deliver

The last leg of Reality Computing is the delivery of captured and modified reality data back to the physical world. This can be accomplished digitally (using project visualizations or augmented reality) or physically (using 3D printing, machine-controlled earthworks, and other digital fabrication techniques).

Digital Realization

Figure 4: Model-based project visualizations help project teams depict their projects in the context of actual surroundings, such as this rendering of the Shanghai Tower. Image courtesy of Shanghai Tower Construction and Development Co., Ltd. Rendering by Gensler.

The ability to produce and present high-quality images or animations of a new building or a consumer product can sometimes serve as a replacement for product prototypes or scaled-down physical project models. Project visualizations can be particularly important on large projects where sheer size and complexity make it difficult to fully convey designs using traditional engineering drawings. Model-based project visualizations that include reality data help project teams depict the project in the context of actual surroundings, making it easier for clients, project stakeholders, and the public to understand the project.

Inexpensive mobile devices and the growing field of wearable computing devices that layer digital information directly onto your visual field will soon make augmented reality presentations routine. The popularity of tablets and smartphones has already resulted in countless consumer-level augmented reality apps, like pointing your iPad at the night sky to see overlaid star charts, or using your smartphone camera to superimpose nearby ATM or subway locations onto the live street view.

Sure to follow will be an explosion of commercial augmented reality technology. Shop floor supervisors and facility maintenance managers can already point their tablets at the QR code on a piece of equipment to access manuals, warranties, preventive maintenance schedules, and work histories. Construction workers already use automated laser surveying instruments to map coordinates from digital models directly onto construction in progress, for example to verify that steel anchor bolts are installed according to the design and within tolerance.

Physical Realization

For decades, the manufacturing industry has used digital design models and computer-aided manufacturing (CAM) techniques to support digital manufacturing—generating digital information to control robotic assembly, CNC (computer numerical control) milling, laser cutting machines, and so forth. More recently, AEC firms are physically realizing digital designs at the civil infrastructure scale by robotically sculpting landforms through machine-controlled grading and using GPS-guided paving machines.

A more recent development for physically materializing digital objects is additive manufacturing, or 3D printing, a technology currently experiencing substantial price reductions. Similar to the improvement of 2D printers (from dot matrix and ink jet printers to color laser printers), the speed, quality, and versatility of 3D printers are increasing while costs are decreasing.

In the commercial world, 3D printing is a well-established technology used by manufacturers for cost-effective prototyping, mold-making, and small-scale production. Car companies are 3D printing full-size prototypes of car bodies for aerodynamic testing. Manufacturers of military parts use 3D printing to quickly and remotely produce customized replacement parts for military equipment. Industrial designers developing consumer products use 3D-printed prototypes to examine the aesthetic and functional appeal of their product designs.

Consumer applications for 3D printing are still the domain of hobbyists, but already enthusiasts can buy action figures with their own likeness. Imagine a household that designs, downloads, and prints its own products—from custom toys, jewelry, or iPod cases to foods such as pasta and chocolates.

Value of Reality Computing

Change

The comparison of reality-captured data over time can yield important information about progress, deterioration, or movement. Reality Computing helps teams monitor construction progress to evaluate subcontractor productivity and quality, measure movement of a building’s critical structural supports while a tunnel is bored below it, or identify a new object on an airfield that may pose a security risk.
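
A minimal sketch of this kind of change detection, in Python with Open3D, compares two scans of the same scene taken at different times, both assumed to be registered to the same project coordinate system; the file names and threshold are hypothetical. Points in the later scan that lie far from every point in the earlier scan indicate new objects, movement, or deterioration worth inspecting.

    import numpy as np
    import open3d as o3d

    CHANGE_THRESHOLD = 0.10  # metres; assumed threshold for significant change

    before = o3d.io.read_point_cloud("epoch1.ply")
    after = o3d.io.read_point_cloud("epoch2.ply")

    # Distance from each point in the later scan to its nearest earlier neighbour.
    distances = np.asarray(after.compute_point_cloud_distance(before))
    changed = after.select_by_index(np.where(distances > CHANGE_THRESHOLD)[0])
    print(f"{len(changed.points)} points moved or appeared "
          f"by more than {CHANGE_THRESHOLD} m")
    o3d.io.write_point_cloud("changed_regions.ply", changed)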

Location

Reality Computing enables processes to be accomplished digitally (and remotely) that otherwise would have to be accomplished in the physical presence of the site or object. For example, a team of construction experts in Chicago, working on a high-rise project being built in Singapore, studies scanned data of the building’s structural system to assess crane placement—without having to fly to the construction site. Researchers, curators, educators, and the general public use web-based interactive tools to digitally view, study, and manipulate objects from museum collections—without having to physically see or touch them.

Knowledge

Reality Computing allows analyses and simulations to be performed digitally rather than on the actual object or system. Industrial designers use 3D-printed components to test a product’s form, usability, and ergonomics. Engineers use models derived from scanned buildings to perform energy analyses and determine a facility’s carbon footprint. Civil engineers create cinematic-quality project animations to support project approval and public outreach efforts for large (sometimes contentious) infrastructure projects.

Figure 5: Reality Computing enables engineers to use digital project models derived from scanned buildings to perform energy analyses.

Specificity

"For more than a century, modern manufacturing has been defined by the maxim that complexity and uniqueness are expensive; in other words, it’s cheaper to produce a large volume of the same thing at a lower unit price. But digital realization is beginning to break that industry touchstone. Digital fabrication, 3D printing, and related technologies are enabling designers to move directly from a digital model to a finished physical object. As a result, complexity and uniqueness have become cheap and manufacturing is being democratized."

— Construction Executive, September 2013
Digital Dreaming by Dominic Thasarathar

By recreating and accommodating physical real-world conditions, Reality Computing is helping designers move directly from a digital model to a finished physical object via custom fabrication and construction processes. Examples include creating brackets to precisely position a curtain wall system on a less precise structural frame, or 3D printing a perfectly fitting dress or jacket.

Fidelity

Reality Computing can replace complicated, failure-prone adjustments to fit an object to an exact physical condition. It allows designers to accurately reflect the physical world (existing or as-built conditions) in their design model for coordination and manufacturing/construction planning. For example, a virtual model of a new car body can be “driven” through the digital point cloud of an existing automotive assembly line to verify the production process, or the existing abutments that will support a replacement bridge can be scanned and imported into the bridge design.
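
A toy version of that virtual drive-through check can be written in a few lines of Python with Open3D: the assembly-line scan is tested against the envelope swept by the car body along the conveyor path, and any scan points inside the envelope indicate potential interference. The file name, dimensions, and clearance value below are hypothetical; production tools sweep the actual body geometry rather than a simple bounding box.

    import numpy as np
    import open3d as o3d

    line_scan = o3d.io.read_point_cloud("line_scan.ply")  # scan of the assembly line

    # Envelope swept by the car body along 120 m of conveyor (plant coordinates),
    # padded by the required clearance on each side and above.
    CLEARANCE = 0.05  # metres
    envelope = o3d.geometry.AxisAlignedBoundingBox(
        np.array([0.0, -1.0 - CLEARANCE, 0.0]),               # min corner (x, y, z)
        np.array([120.0, 1.0 + CLEARANCE, 1.6 + CLEARANCE]))  # max corner (x, y, z)

    clash_idx = envelope.get_point_indices_within_bounding_box(line_scan.points)
    if clash_idx:
        clashes = line_scan.select_by_index(clash_idx)
        print(f"{len(clashes.points)} scan points intrude into the swept envelope")
    else:
        print("No interference detected along the conveyor path")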

Industry Ramifications

The Reality Computing examples mentioned above are just a few high-level illustrations. Below are some more specific examples of how forward-thinking organizations across industries are using Reality Computing technologies. In these examples, organizations are capturing physical information digitally using a variety of different technologies. They are working directly with this captured reality data using digital tools that do not require them to turn the scanned data into modeled geometry. Additionally, they are putting that data back to work in the physical world through 3D printing or other numerically controlled production or visualization technologies.

Building Construction and Maintenance

McCarthy Building Companies is saving hundreds of thousands of dollars on the Kaiser Permanente Oakland Medical Center by not modeling as-built conditions. Instead, they are laser scanning the construction at key milestones for a higher level of detail and accuracy than would ever be possible in an as-built 3D model. If they (or the hospital’s facility group) need to go back into a completed section of the hospital for additional work, these scans will help them perform ‘arthroscopic surgery’ on the facility—using a hole saw to access the services in the wall instead of closing down an operating or patient room to open up the wall.

Stiles Corporation used Reality Computing to coordinate the installation of an 8.5-ton chiller during the renovation of a performing arts center in Florida. To limit the center’s downtime, the firm used scanned reality data of the facility’s existing mechanical room and the access hallway, combined with a digital design model of the chiller, to perform 4D clash detection and carefully plan the movement of the new unit.

Infrastructure Construction

Terrametrix paired mobile LiDAR scanning with powerful feature recognition technology for a survey of more than 7,200 bridges for the California Department of Transportation. This resulted in increased safety for the survey workers and a sizeable reduction in survey information turnaround time: from one month to one day.

Weaver-Bailey Contractors has started to replace conventional, manual survey and control methods in their highway construction business with grading and paving equipment controlled directly by digital design files, GPS, and laser control systems. By forming the physical highway directly from digital information on a six-mile stretch of Highway 67 in Arkansas, they expect a 40 percent reduction in labor and a drop in material overruns from as much as 30 percent to 2 percent or less.

Automotive Factory Design and Manufacturing

Figure 6: Automotive manufacturers use reality scanned data to capture existing conditions of factories and assembly lines. Image courtesy of Volvo Car Group.

Volvo Cars no longer tries to maintain as-built geometrical models of the manual assembly cells for their plants because the models are expensive to develop, immediately out of date, and lose the rich detail captured with the reality data. Instead, they scan existing assembly cells and the resulting point cloud is their digital plant. Rather than interrupting production to move physical mockups through an assembly line, Volvo Cars uses a combination of reality-captured and modeled data (of new assembly components and car bodies) to simulate and verify production processes for new models.

Aerospace Manufacturing

After thousands of hours of operation, the individual blades in a turbine or jet engine are worn but in balance with the rest of the fan. Businesses that specialize in the maintenance, repair, and overhaul (MRO) of this kind of equipment are often required to repair a damaged blade. But the work must match the worn state of the original, and therefore the original design models cannot be used. Instead, they scan the existing blade and use numerically controlled machines and specialized welding techniques to restore damaged areas to the blade’s current, worn contours.

Consumer Manufacturing

Nike engineers modified a computer-controlled sweater knitting machine to manufacture the upper portion of one of their newest products—the lightweight Nike Flyknit. As the shoe is digitally stitched, the materials can be modified to alter the shoe’s strength or flexibility.

Align Technology’s Invisalign system uses digital modeling software and 3D printing for mass customization of clear, removable aligners that are used to straighten teeth. The system uses x-rays, pictures, and impressions—and more recently 3D digital scans—of a patient's teeth and mouth to create a digital model. This model is used to develop a treatment plan and produce 3D-printed custom-fit orthodontic appliances.

Healthcare

Healthcare providers and manufacturers use Reality Computing to produce medical solutions customized to individual patients such as orthodontic braces (mentioned above), orthopedic insoles, hearing aids, and hip implants. Doctors are also starting to use Reality Computing to create 3D-printed replicas of anatomy that help them plan and prepare for difficult operations. For example, a team of pediatric heart surgeons in Kentucky studied a polymer 3D-printed replica of a 14-month-old baby’s heart prior to surgery to repair the child’s heart defects.

In addition, 3D scanning and printing is being used to create custom surgical implants. For example, doctors in Wales successfully performed reconstructive surgery on a man disfigured in a motorcycle accident. CT scans were used to create a 3D-printed skull used for surgical planning, surgical cutting guides used during the operation, and a medical-grade titanium implant to hold the bones in their new shape.

Future Scenarios

Reality Computing holds huge potential across multiple industries, and as the technology evolves, so will its uses within and across those industries. For example, consumers can already customize their Nike Flyknit shoes by choosing different color combinations for various parts of the shoes (upper, sole, laces, even the Nike swoosh). But in the near future, one can imagine a completely customized shoe that is contoured to an athlete’s foot. You visit your local Nike store, have your feet laser scanned, and get a custom-made pair of shoes, sized to your feet, printed for you while you wait. Longer term, consumers may have their own household 3D printers to fabricate items such as custom clothing, toys, even replacement parts for household appliances.

Figure 7: The use of Reality Computing to support augmented reality will enable municipal and utility workers to visualize underground utilities superimposed over real images of a street. Image courtesy of VTN.

Within the construction industry, the production use of augmented reality is just on the horizon—where utility workers and excavators will have ground-penetrating scanner devices hooked into equipment monitors (or even Google Glass) displaying georeferenced 3D models of the underground utilities superimposed over live images of the construction area.

The use of Reality Computing to produce medical solutions customized to individual patients is already happening (see examples above). Currently, medical researchers are investigating ways to 3D print actual soft-tissue organs—a finger or a kidney for example—that will not be rejected by a patient’s body.

For industrial manufacturing, military personnel in combat zones or technicians in remote areas are already scanning equipment and machinery parts to digitally repair and fabricate a new part. In the near future, these groups could carry small portable 3D printers to produce the part on the spot, and repair technicians using Google Glass could view information related to the equipment (such as animated repair instructions) to quickly get the equipment back in service within days or even hours.

Reality Computing may even be destined for space one day. A professor at the University of Southern California has developed a layered fabrication technology for automating the construction of whole structures as well as sub-components. Single or multiple houses can be 3D printed, complete with conduits for utilities. “Contour Crafting” was initially seen as a way to quickly construct emergency concrete housing or to build concrete structures in locations such as Hawaii where concrete must be imported. But NASA is considering the technology to robotically print an airport on the moon or habitats on Mars.

Conclusion

“Most AEC firms laser scan existing conditions for use for design development on a renovation project. But we’re doing it during the course of construction—creating as-builts as we go, and using that as a component within our change-management process. This has allowed us to mitigate the impacts of changes, and has helped to keep us on schedule and within budget.”

—Chris Pechacek
Virtual Design and Construction Director
McCarthy Building Companies

Reality Computing—from capturing reality data and digitally creating new information, to the physical or visual delivery of digital information in the physical world—represents an information platform shift for the design, production, and management of physical things.

While the constituent technologies of Reality Computing have developed independently, they are increasingly being integrated into new workflows by organizations and teams across a range of industries to improve performance and better serve customers and clients.
