FEATURES    Issue 2.05 - May 1996

Lights, Cameron, Action

By Paula Parisi

It's about imagination, says Jim Cameron, not technology. After handling puppetry and effects for B-movie king Roger Corman, he mastered the techniques in bringing The Terminator and Aliens to the screen. Then, in The Abyss, he had a vision that traditional techniques could not realise, and turned instead to silicon: there was simply no other way to make water do what he wanted.

Since then his determination to conjure up ever more fantastical images has led him to keep pushing at the limits of what's possible for digital film. In the spring of 1993, Cameron launched his own state-of-the-art effects house, Digital Domain, with partners IBM, character creator Stan Winston, and former Industrial Light and Magic (ILM) chief Scott Ross. The shop cut its teeth on effects for Interview with the Vampire, Apollo 13 and Cameron's own True Lies. The director recently tried his hand at a 3D short-film attraction, Terminator 2 3-D, for Universal Studios Florida.

Under the banner of his production company, Lightstorm Entertainment, Cameron is currently contemplating such projects as Spider-Man and an adaptation of Anne Rice's The Mummy. His next release is set to be Titanic, a retelling of the maritime disaster incorporating actual on-site footage of the ship, shot last autumn with underwater cameras modified for the purpose by Cameron and his brother, a mechanical engineer. Cameron describes Titanic as "a four-hankie love story". With effects, right? "Big effects - but subtle."

Wired: You have no formal film training and at university gravitated toward the sciences, which is probably why you have a knack for mixing science and entertainment.

Cameron: I've always seen film as a technological medium, so it's not that strange. I started in marine biology - then I found out how much money marine biologists made. [Laughs.] What I wanted to do was go explore the deep ocean, and as a marine biologist you wind up counting salmon eggs in some hatchery. So I switched to physics, and I was pretty fascinated for a couple of years. But my math wasn't really strong enough to be a physicist who was going to push the envelope, so I decided I ought to get into something more artistic.

And now, full circle, the Titanic takes you out to sea. You just got back from a second-unit shoot. Where in the Atlantic were you, exactly?

About 800 miles from the coast of Nova Scotia, east-southeast of St John's, Newfoundland. No helicopter could get out to us - we had to get our dailies from a tanker pilot. He'd fly over and airdrop them by parachute into a Zodiac, and we set up a projector on the ship to view them. We were there 21 days. You dive every other day, basically.

What was it like to dive the Titanic?

The descent took two and a half hours. It's about 2.5 miles under the ocean, and the pressure is 6,000 pounds per square inch. The pressure on the front dome of the camera housing is 1.2 million pounds. We went inside the ship with a little robot video camera and got some incredible footage, things people had dreamed about but never seen. You see the inside of the Titanic: the furniture, the panelling, the chandeliers - it's all still there. To see the Titanic sitting there on the ocean floor 83 years after it sank is quite awesome.
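As an aside, those two pressure figures are roughly self-consistent, and one can be derived from the other. A quick sketch (the 0.445 psi-per-foot seawater gradient is a standard approximation, not a figure from the interview):

```python
import math

# Figures from the interview
depth_miles = 2.5
pressure_psi = 6_000          # pounds per square inch at the wreck
dome_load_lb = 1.2e6          # total force on the housing's front dome

# Seawater adds roughly 0.445 psi per foot of depth
depth_ft = depth_miles * 5280
estimated_pressure = depth_ft * 0.445          # ~5,900 psi, close to the quoted 6,000

# The implied effective area of the dome: force divided by pressure
dome_area_in2 = dome_load_lb / pressure_psi    # 200 square inches
dome_diameter_in = 2 * math.sqrt(dome_area_in2 / math.pi)  # ~16 inches across

print(round(estimated_pressure), round(dome_area_in2), round(dome_diameter_in, 1))
```

In other words, the 1.2-million-pound figure corresponds to a dome about 16 inches in diameter - a plausible size for a deep-sea camera housing.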

You've said that part of what appealed to you about the Titanic was the whole aspect of technology and man's disillusionment with it.

Not only disillusionment, but our tendency even beyond disillusionment still to put our faith in it. I can draw a pretty good parallel between the faith that they put in a ship like the Titanic in 1912 - an almost unshakable faith that science and technology would deliver us to a better age and a better life, with no penalty paid - and the current hype and hysteria about the information superhighway and how computing is going to deliver us to a brave new world. Which it probably will, by the way, but it's not without risks, and I think people have to be very heads up about the risks.

It's rapidly approaching the point where all films will have a computerised component, if they don't already. That's something you pioneered.

Not single-handedly, but I was lucky enough to be surfing that wave when it first broke. There was something happening that I don't think anyone could push forward or hold back - there was an opportunity to ride that wave at exactly the right time. We took that opportunity on The Abyss, we took it again on Terminator 2, and we took it a third time to found Digital Domain. But before I became involved in computer graphics, there was a good ten- or fifteen-year history of pioneering work, with people writing the code necessary to do 3D imaging and figuring out how to do polygonal modelling and early inverse kinematics. It was all being done, but it was being done in rarefied environments, at universities and in the R&D labs of big software companies. It hadn't reached the artists yet, per se. It hadn't cross-pollinated into the film industry, which had both the art and the money to make it a broad cultural phenomenon. But everybody had wanted it to happen, everybody had been trying. There were fledgling efforts, in films like Tron and The Last Starfighter. There was even a little bit of liquid-metal morphing in a film called Flight of the Navigator. It was composited optically, not digitally, and done in kind of a crude manner, but it was effective.

From a film-maker's perspective, though, I was pioneering. It took a leap of faith - and a greater leap of faith on Terminator 2 than on The Abyss, because on The Abyss the computer was really used to solve a single sequence, and if that sequence had failed, the film still would have succeeded dramatically. On Terminator 2, the success or failure of the film was really predicated on the success or failure of the digital technique. The great leap of faith was believing we were ready - that we could risk a US$90-plus million negative, which is a pretty high investment, on a group of people at ILM who couldn't guarantee that they could do what I wanted them to do but said, "We think we're ready."

What's the situation now?

Anything is possible right now if you throw enough money at it, or enough time. We have the right tools, or we can combine tools, to do anything. That doesn't mean that it's easy, that it's straightforward, that it's intuitive, or that it's cost effective. The goals within the next ten years are to make the interface with the film-maker more intuitive and easier to use, to bring costs down and to create a cohesive field out of all these disparate tools. We're doing that at Digital Domain. It's in the best interests of all the major digital production facilities to do that.

What happens next?

I think the next big hurdle will be to do a feature-length, fully animated computer graphics (CG) film that's photorealistic, or even a feature project that has a significant amount of photorealistic animation in it. Take Jurassic Park - when you cull out the dinosaur shots done using CG, as opposed to the ones using Stan Winston's full-scale animatronic dinosaurs, I think they amount to somewhere in the neighbourhood of five to six minutes of film. The next step will be to get it into the 20- to 30-minute range in a feature project and really make it a film-making tool in a broader sense, in the way that Disney went from doing shorts to doing feature-length films in the '30s and '40s.

Then there's Toy Story, which isn't photorealistic at all.

Photorealistic means different things to different people, but if you have a fanciful subject that's modelled as if it's a real thing, then you're bordering on photorealism. Toy Story falls into that category. It's a little fanciful in its style, but all the surfaces look real, and the lighting looks real. It has a cartoon-like art direction, with a photorealistic rendering. That makes it surreal - it kind of plays around with your visual cortex, although we are tending to get inured to the kind of classic computer-animation look now.

A lot of film-makers are trying to capture the virtual reality experience on film. I think it's a big mistake. I'm talking about virtual reality using overtly artificial-looking computer animation. On a number of pictures a lot of money has been spent on really expensive virtual reality scenes, but the audience goes in knowing they're in an artificial environment, so they don't credit the work. In T2 and Jurassic Park, computer animation was being used to solve a real-world photographic problem, and so the audience didn't question the reality of the images. Film is inherently kind of not real, and the films that succeed best are the ones that start by creating a world or characters or whatever that say: this is real, this is real - and they keep coming at you every moment the actors are working, and every bit of production design is trying to underline in red that it's real. The moment you start playing with virtual reality, the audience knows that what they're seeing is not real, so you've sort of violated one of the most powerful things about film - that ability to create an alternate reality. Maybe you have 10% of the audience fascinated by the images, but the other 90% has shut down. That's my theory on why those films appeal to such a narrow demographic. The other thing is that film doesn't address what's great about virtual reality, which is the interactivity: you move and, as you move, the scene changes. In a film, you have none of the upside - none of the interactivity, none of the control.

But do you believe in VR as an entertainment?

Absolutely. I love it, and I'd support a virtual reality project spun out of one of my film projects, but I wouldn't focus my time and attention on it. I'm more interested in linear narrative than I am in the multiplicity of narrative possibilities you need in a virtual environment.

Films like T2 could easily trigger a lot of other entertainments, one example being the new 3D theme-park attraction for Universal Studios Florida, Terminator 2 3-D.

For me it falls more into the category of, if we were going to generate a videogame from Terminator, I'd want to be involved to the extent that I wouldn't want it to violate what is known about that universe. But I don't think I'd want to sit day in and day out and help design it.

Speaking of T2 3-D, why did you go with 65 mm 3D? A 3D camera in such a large format must have been the size of a washing machine, making it incredibly difficult to shoot in your signature moving camera style.

That's one of the things that made it interesting. It's also one of the things that made it nearly impossible to shoot. One of the more ambitious 3D projects done previously was Muppet*Vision 3D, and I don't mean that to sound pejorative - Muppet*Vision is a lot of fun - but you have basically a lot more of a static proscenium presentation, where the 3D shots are 20 to 30 seconds long. On T2 3-D, we were doing rapid cutting and rapid camera movement and trying to keep in the vein of the Terminator films. And you can't move a 450-pound camera rapidly. Well, we figured out ways to do it, but it sure was a nightmare. The difficulty of doing 65 mm versus 35 mm was not just sort of twice, it was an order of magnitude: it was ten times greater.

You seem to enjoy anthropomorphising machinery. Is that something audiences relate to?

It fits stylistically into the story: the antagonist in the first Terminator film and the protagonist in the second are human-seeming machines. But in order for a machine to have an attitude, even if that attitude is pure malevolence, it has to seem somehow alive and not just be a toaster. It doesn't have to be so much anthropomorphic as it has to seem like a being, an entity - and all that really takes is giving it two eyes. Though actually, in Aliens one of the things that's most terrifying about the alien is that it has no eyes. All it is is a mouth, a living mouth, which was a brave stylistic concept, I thought.

I heard that T2 3-D is running through digital, for tweaking and processing of images, adding in computer-generated imagery elements, the mini Hunter-Killer robots and so on. Is this how films will be made in the future?

Eventually. We can't afford it right now, but I was just saying in a meeting that I estimate it will be five years at the earliest and ten years at the latest before most movies are scanned end to end after filming.

And the benefits of this will be?

If you're cutting in a nonlinear system, and the resolution of the system is a little higher than it is right now, instead of doing negative cutting and all that sort of thing, you'll just do a digital assemble of the film. Then, you'll do all your colour timing and your final f/x composite work at a point where it's essentially all the same generation. So your dissolves and fades and titles and visual effects will all be done at the same generational level, and you'll spit out directly to a printing internegative. The beauty of that is you can take all your VistaVision and 70mm cameras and put 'em in a museum, because you're not going down an additional generation on all effects shots.

It's pretty cool. I think it's going to happen on big-money visual-effects films first. The next stage, which will cause it to become the norm for every film, is when we go to some sort of electronic cinema projection system, and that could be anywhere from five to fifteen years out. The second that all you need to do is pipe terabytes of data out to a theatre someplace, there's no point in ever seeing film. My guess is you'll still run out a printing internegative to ship foreign, because there will be a lag between domestic and foreign. Theatres in different countries are not necessarily all going to embrace electronic projection at the same time. But even just in the domestic marketplace, we spend about half a billion dollars a year, in the film industry collectively, on release prints.

Movie cameras are essentially a 100-year-old technology. Will the industry be switching to electronic capture or direct-to-disc any time soon?

I really don't know. It still seems like it hasn't made great inroads, though I've seen stuff done on high def that was pretty incredible. In fact, I saw a demonstration that kind of blew my mind - a piece of tape was recorded from Sony's HDC-500 analogue camera onto their big digital recorder, which is about the size of a phone box. They took their resulting data, ran it through an electron-beam recorder, filmed it out to black-and-white separations, blew it up to IMAX size, then ran it on an IMAX screen, and I would say it looked a lot like IMAX. I've seen demonstrations of high def projected before, and it's always looked crappy A/B'd with film, but I would have to assume that most of the contamination of the image is happening at the projection stage right now, because the bandwidth of high-def capture is very high, much higher than I thought. I'd been told this by engineers for some time, but I'd just never seen tangible results before.

Now the problem is that it's a $3 million camera and a $5 million recorder, and the recorder weighs about 600 pounds. An Arri 2C can record an image just as well, but it weighs only 16 pounds and can cost $20,000, so there's still a big gulf between the two.

But I guess if you subscribe to Moore's Law - transistor counts, and with them processing power, doubling every 18 months - and factor in that costs will drop by half over the same period, then electronic imaging will become affordable and you'll be storing your images on disc instead of film, doing away with things like chemical processing.

Exactly, because there's a whole cumbersome mechanism downstream of initial capture with film that you don't have to the same degree with digital capture. One of the things we are looking at actively at Digital Domain is using a high-def camera on stage, because there you don't worry about your weight considerations, and it's not a hostile environment, so you're not going to damage expensive equipment. The advantage is you can take your data straight into the system, you don't have to wait for dailies, and you can get immediate feedback from a composite workstation, then say, "You know what, I need it a little more from this angle."

I think we're going to do this on Titanic. We're looking into it right now, because I've got a lot of bulk-reducible composite elements that we're going to need - people walking on decks and things like that, and a lot of the stuff for the sinking, of bodies, people falling and stunts being done - that are going to have to be done in front of a green-screen and then composited with motion-control models.
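The Moore's Law arithmetic raised a couple of exchanges back is just compound halving. A minimal sketch, using the US$5 million recorder figure from the conversation and taking the questioner's 18-month halving period at face value:

```python
def projected_cost(initial_cost, years, halving_period_years=1.5):
    """Cost after repeated halvings - the Moore's Law-style assumption above."""
    return initial_cost / (2 ** (years / halving_period_years))

# If a $5 million recorder really halved in cost every 18 months:
for years in (3, 6, 9):
    print(f"{years} years: ${projected_cost(5_000_000, years):,.0f}")
# 3 years: $1,250,000 / 6 years: $312,500 / 9 years: $78,125
```

On that curve the US$5 million recorder would undercut the $20,000 Arri 2C in roughly 12 years - which, for what it's worth, squares with Cameron's five-to-fifteen-year horizon for electronic cinema.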

So how much longer does film have as an initial-capture and storage medium?

I would guess it's indefinite, as a matter of choice on the part of the film-maker. Film-makers are artists, and different artists work in different mediums. Some people still like to shoot anamorphic, even though I think it's a dead format from a practical standpoint; they just like the way light sources have these strange horizontal slashes of distortion across them, little things like that. It's just like a painter liking a Number 4 sable brush.

Now that we can program Cooke prime lens ratios into computers, maybe the film artifacts will be programmable as well.

That's true. You may be able to add some of that stuff back as a filter if you want it, and that's an area I've personally been fairly active in, encouraging our TDs and CG artists at Digital Domain to really think photographically. You're making objects up out of nothing, and you're doing virtual photography; essentially, you're trying to fob them off on the audience as something that was real. The whole sort of mental paradigm for Apollo 13 was, If you had a camera in your hands and you were floating and wearing a space suit and could pan the ship as it went by, what would it look like? So they spent a lot of time on edge halation, lens flare and that sort of thing, interactive lighting and all the photographic effects that are very subtle, but they add up to making it look real.

There's so much posturing in Hollywood among the creative community, and even some of the executives, that now it's hip to be a computer nerd. How digital is Hollywood, really?

It's getting very digital. Nobody's doing it just to be hip - the costs are too high. People are taking a good look at what they need. There is a multiplicity of platforms and software and systems available, and they all do different things. Nonlinear editing is a wave that's breaking in a huge way. I see the editors resisting it. I see the studios embracing it, often not for the right reasons: not because it enhances creativity, but because it reduces post-production time and costs. It's here, and everybody knows it. The important thing is not to overspend, given the rate at which systems become obsolete.

Are animators and computer jocks becoming stars in their own right?

Absolutely. In the old days, the software didn't exist; you basically had to write your own. Now you can buy everything. But what you can't replace is the trained eye, and the heart, of the artist. As much as computers have democratised information and computing, software still can't take the place of the artist's mind. You need people who not only have the soul of artists but are trained as artists, not as technicians. You need people who understand perspective, understand lighting, understand composition. What I see is a few good ones and a lot of mediocre ones, who may have basic skills in Flame or this renderer or that one. You've got superstars, and then a lot of people who are being overpaid; there are more jobs than there are people. Two or three years from now that's going to reverse, because the word is out and everybody knows they can get a job in Hollywood if they're good at this stuff. You're going to get a flood of university grads who are going to come knocking on our doors, and then it's going to be a buyer's market again.

Is there any one job in the chain that's most in demand?

We don't have enough great animators, clearly. To be a great computer animator, you need to understand cel animation, which is really about creating a character with a brush stroke or a pencil line. Computer animators tend to come from a school that says the character is a fixed shape. But any cel animator will tell you instantly that the character changes shape as it moves, as it hits a wall, as it takes off, as it leans forward: the neck stretches, the body elongates - it's a whole different philosophy. The great animators of the next decade are going to have a foot in both camps. They're going to understand that the technique isn't as important as the character, the story, the event and the dynamic of the specific event being animated. Which quite frankly I think Spaz - ILM's Steve Williams - nailed in The Mask. But he's unique.

What about at Digital Domain? You have about 50 animators on staff now.

Our animation capability is expanding rapidly, and there are a couple of film projects we're going to be doing in the near future. I think our animation department is going to triple or quadruple in size. We also have a lot of animators working on games, and that's a good place to bring them in and train them.

Computers are getting so inexpensive that there's a near-future prospect of buying a machine at a local computer store that, although still pricey, can in some ways approximate an SGI machine. Will digital "garage bands" change the business or the aesthetics of entertainment?

That's a pretty interesting idea. At Digital Domain we're actually using lower-cost platforms, just PCs basically, to do a lot of preliminary modelling, texture mapping and bump mapping before we ship it up to the SGIs, which carry a higher cost rate for animation.

If you have the sort of garage-band formula where you can sit there and you don't care if it takes two days to render, it's going to reach a point pretty soon where you can do most of what can be done at a high-end facility. But that isn't really going to change the basic paradigm for film-making, because film-making is about creating a kind of brand awareness for a film, and that takes stars, and it takes established film-makers. Anybody can go out there with a video camera or with inexpensive equipment and make a low-budget movie for a couple hundred thousand dollars. It doesn't mean it's going to get released, because to make any kind of impact in the marketplace now you need millions of dollars in an ad campaign. So it doesn't matter how cheaply something is done.

So that movement won't redefine the industry?

The thing that everybody fails to remember here is that when you're making a big effects film and you've got over 200 shots in the film, you've got issues of management, you've got issues of dealing with the day-to-day flow of the work, and you've got organisational problems. It's a little bit like saying, "With these tools I can go out in my backyard and build a car." Well, yeah, you might be able to build a car, but can you be Ford Motor Company? Can you build a thousand of them? I don't think so. That's why I don't feel Digital Domain is in any way threatened. I can go out and find the most brilliant guy working in his garage, and he wouldn't be able to service my needs on Titanic. He might be able to service his own needs, but he can't do what I need done.

George Lucas has said that he thinks digital delivery over nets and film-makers with their own file servers will revolutionise distribution.

I think they will. I think they will revolutionise an individual's ability to get the work seen, as opposed to having to work as an indentured servant for many years to get that opportunity, and in the meantime you've completely lost touch with your audience. [Laughs.] You're going to have people making short pieces - call them films, call them anything - sticking them out there, getting immediate feedback. And possibly you'll find a better crop of young film-makers to choose from, further down the line. But does it change the big smokestack film industry system? Probably not very much, simply because we have more films than we have people going to see films. You see good films dying on the vine, and other equally good films that are successful. What's the difference? Marketing, and the effectiveness of that marketing. That takes bucks and it takes know-how. And from the studio perspective, the know-how is hiring someone expensive to run your marketing division.

But can we say that digital's changed the culture of Hollywood? Or is it still the same today as it was before 2001?

[Laughs.] The improvement of effects has basically elevated fantastic film-making from B-level film-making to A-level film-making, and that happened in the '60s and '70s. Over time, it hasn't really changed the culture of Hollywood at its roots, but it has shown that in a way there's an alternative to the star system. Because in the '40s, you either had a movie star or you had a B-movie. Now you can create an A-level film with some kind of visual spectacle, where you cast good actors but you don't need an Arnold or a Sly or a Bruce or a Kevin to make it a viable film. Then you have the megamovies that do both.

Like your movies.

I like to give people their money's worth.

You've created one synthetic star, Terminator 2's T-1000 character - but the template was an actor, Robert Patrick.

And the performance, even when it was in its liquid-metal mode, was created by him. We did early motion capture on Robert to get the movements and the head gestures, the way he inclined his head when he listened. That's really the model for how it should continue to be done. We're putting a pretty heavy investment into motion capture at Digital Domain and into a new level of performance capture to get facial nuance. That's how I think synthetic characters should be driven - by actors. You get that spontaneous impulse of a character - as the actor would do it - and you continue to imbue that synthetic character with a sense of a human presence that's very specific to that actor.

Do you have any interest in working with actors who are, shall we say, no longer with us in the flesh?

No. I think one of the things that makes those actors precious is that they had a life. They meant something to us at a certain point in time, and they made great films. They burned like a meteor and went out. That's the way it should be. The idea that Marilyn Monroe could be working a thousand years from now doesn't appeal to me.

No actor has ever said to you, "How can I preserve myself...?"

I haven't been approached, though it's certainly plausible. I think this will be done in the future in the same way that diffusion filters are now used for leading ladies who are a little past their peak. I think it'll be possible to very seamlessly and very invisibly extend an actor's screen life by years and maybe even decades. Obviously, that's really an individual choice by an actor, just like they decide how much makeup they want and how they want to be lit. I don't see why the ability to lightly digitally alter an image should be considered any different from any of those other artifices. Hollywood is not about truth, it is about illusions.

Paula Parisi (pparisi@aol.com) covers technology for The Hollywood Reporter. She wrote about synthespians in Wired 1.08.

65 mm 3D Twin cameras capture separate image perspectives for the right and left eyes that, when projected simultaneously, simulate dimensional viewing. By nearly doubling the size of the film format from the normal 35 mm to 65 mm, Cameron more than quadrupled the bulk and weight of his camera system, to about 450 pounds. The upside: substantially better image quality.

Running through digital Typically, only sequences to which digital effects will be added would be scanned. For T2 3-D, every frame was scanned into the computer, so that even the frames without effects could be colour-calibrated and otherwise fine-tuned.

Digital assemble Essentially a digital edit. The computer offers access at a keystroke, a convenience that editors in the analogue era couldn't enjoy - they had to eyeball it through miles of film. Though digital effects get more ink, the widespread acceptance of digital editing is one of the major factors driving Hollywood toward an all-electronic production system. Currently, even no-effects films must be digitised if they are to be electronically edited on a state-of-the-art system. Favourites include Avid and Lightworks workstations, though you can buy software that allows you to edit on a Mac or PC.

Electronic cinema Hailed as "the future" amid the HDTV hype of the late '80s, electronic projection for movie theatres has been pushed to the back burner as it's become apparent just how difficult it is to equal film's quality at anything near a reasonable cost. The big challenge: creating an electronic projector capable of pumping out an image vibrant enough to be displayed at feature-film sizes. A major advantage would be the convenience of electronic distribution, sending films to theatres over land lines or by satellite rather than by courier.

High def Video cameras that offer 1,084 vertical lines and 1,920 horizontal lines of resolution, about two and a half times that of normal video. Rolled out in the late '80s, with an eye toward replacing existing video and television platforms rather than replacing film, the format was a big bust in Japan, the first country to offer it to consumers; the US$30,000 price tag on the TV sets turned off buyers despite a big high-def push from broadcaster NHK.

A/B'd An A/B comparison is, literally, when you run two pieces of film side by side at the same time, checking one against the other.

Initial capture What used to be called principal photography - shot on a set or on location - as opposed to digital capture, which involves porting images into a computer for manipulation.

Composite workstation A workstation for image compositing, the process of combining multiple layers into a single image. The layers are typically referred to as "elements". The elements could be anything from a plasma beam that shoots out of a ray gun to a new background to a CG character. Though most computers take a bit of time to crunch the data together, a new software product called Flame speeds the process nearly to real time, allowing even non-digital directors to sit with a computer jock for a very interactive effects edit session.

Motion-control models Miniature models filmed with a computer-controlled camera whose moves can be repeated exactly, pass after pass, so that separately shot elements line up when composited.

Anamorphic Lenses used for films that will be presented in wide-screen (the better-known trademark name is CinemaScope), delivering an aspect ratio of 2.35:1. The format has been superseded in popularity by 1.85:1 filming (aka spherical or Academy aperture), which requires less trimming to reformat for television, the aspect ratio of which is 1.33:1.

Virtual photography The increasingly popular concept of the computer as camera, "photographing" geometry or wireframes and working up a fully rendered object.

Nonlinear editing Random-access editing that removes the constraints of consecutive frames. See Digital assemble.

Machine at a local computer store A desktop setup such as the DEC Alpha PC, totalling roughly $50,000 with enhancements and software, can get you going with a pretty impressive film-making package. A souped-up Power Mac clone runs only $20,000 including software.

SGI machine Silicon Graphics Inc.'s favoured model among film-makers - the Indigo2 High IMPACT 10000, which with all the necessary additions will set you back about $115,000 - is configured to take advantage of the popular professional software packages.

Preliminary modelling, texture mapping and bump mapping Rough-draft effects work, good enough to preview how a final shot will look but not of the calibre necessary for insertion into the finished product. Typically, the same database of information is used - it's simply ported from the lower-end machine to a high-end device before being output.

Digital delivery over nets Transmission of high-quality video over the Net will be possible when telephone and/or cable companies bump up to broadband capacity, loosening the chokehold television networks and motion-picture studios have on mass distribution of filmed entertainment product.

Film-makers with their own file servers A file server, for storing and regulating access to one's wares over the Net, will pave the way for mini entertainment conglomerates.

Motion capture Process by which a computer records the motions of a person or puppet outfitted with special sensors, so the motion data can then be used to animate something else, often a computer-generated character.

Facial nuance Motion-capture systems are designed for gross body movements, while the more intricate expressions of the face would typically be performed and recorded separately by a more sensitive facial-capture device. The data could either be incorporated into a single character, or used separately.

Digitally alter an image Flesh-and-blood actors are being digitally tweaked for everything from spot removal to "youthification" through hairline changes, the removal of facial wrinkles and the like. A moral leap from wigs and makeup? You decide.