Nuggets From
1996 Computer Game Developers Conference

Phil Davidson

April 16, 1996


  1. Authoring Tools Hits and Misses -- seminar led by Ms. Jamie Siglar. She maintains the Usenet's FAQ on multimedia authoring tools at
  2. Intel processor optimization principles. Intel preceded the CGDC with its own three-day seminar on its MMX extensions.
    1. MMX is a set of new instructions to be implemented on future Intel processors which will become common in 1997. The MMX instructions are designed for efficient handling of certain operations that are common in multimedia processing. Some of the MMX instructions can handle up to eight bytes of data at once.
      1. Sources of further information.
        1. For details about MMX, see
        2. For Intel optimization in general, see
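The packed-data idea behind MMX can be sketched in plain C. The loop below imitates what a saturating packed-byte add (in the spirit of MMX's PADDUSB instruction) computes on a 64-bit value; the function name and the scalar loop are illustrative only, since the real instruction handles all eight lanes at once.

```c
#include <stdint.h>

/* Scalar illustration of an MMX-style packed operation: treat a 64-bit
 * word as eight independent byte lanes and add them pairwise with
 * unsigned saturation, so no lane overflows into its neighbor.  A real
 * MMX instruction (PADDUSB) does all eight lanes in one step; this loop
 * only demonstrates the data layout. */
static uint64_t padd_usb(uint64_t a, uint64_t b)
{
    uint64_t r = 0;
    for (int lane = 0; lane < 8; lane++) {
        unsigned av = (unsigned)(a >> (8 * lane)) & 0xFF;
        unsigned bv = (unsigned)(b >> (8 * lane)) & 0xFF;
        unsigned s  = av + bv;
        if (s > 0xFF) s = 0xFF;            /* saturate instead of wrapping */
        r |= (uint64_t)s << (8 * lane);
    }
    return r;
}
```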
    2. For intensive low-level optimization (on any Intel processor), Intel's Vtune utility is a wonderful help.
      1. It gives details about how adjacent instructions will dovetail in practice. It makes use of intimate knowledge of how the instructions delay one another.
      2. It can perform statistical samplings on a running program, to identify the hot spots down to the level of individual instructions. This can help identify larger issues, such as limitations that come from data misalignment or from the CPU cache.
    3. The tightest assembly-language loops are limited not by the CPU instructions but by other speed limitations: transferring data in and out of the CPU.
      1. Four bandwidth values are relevant:
        1. The speed of ALU (arithmetic/logical unit) operations within the CPU.
        2. The speed of data transfers between the CPU and the L1 cache (closest to the CPU).
          1. MMX operations (eight bytes at a time) bring the CPU's data rate up to the limit of the L1 cache. Therefore they're optimal.
        3. The speed of the L2 cache.
        4. The speed of transfers between the CPU and main memory.
      2. The CPU's write buffer requires eight cycles (!) to complete any write. For fast loops, this will be the limiting factor. Therefore, if a loop is limited by this write time, then consider performing further computations on the data within the loop. That is, find something useful to do during stalls.
      3. The data cache is loaded in 32-byte chunks called lines. It was suggested that we plan the contents of 8K of the cache. Large blocks of input data can be force-loaded into the cache by reading one byte every 32 bytes. There's no advantage to force-loading data from an output-only buffer.
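The force-loading trick can be sketched in C. The 32-byte line size matches the talk; the function name, and the fact that it returns a count of lines touched, are just for illustration.

```c
#include <stddef.h>

#define CACHE_LINE 32  /* Pentium data-cache line size, per the talk */

/* Pull a buffer into the data cache ahead of time by touching one byte
 * in every 32-byte cache line.  The volatile read keeps the compiler
 * from optimizing the touches away.  It returns the number of lines
 * touched only so the effect is checkable; a real prefetch loop would
 * return nothing. */
static size_t force_load(const unsigned char *buf, size_t len)
{
    volatile unsigned char sink;
    size_t lines = 0;
    for (size_t i = 0; i < len; i += CACHE_LINE) {
        sink = buf[i];
        lines++;
    }
    (void)sink;
    return lines;
}
```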
  3. Software Maturity: Do Game Developers Really Need It? -- lecture by Larry Constantine, famous software management consultant (of Constantine & Lockwood, Ltd., and the University of Technology, Sydney).
    1. The Capability Maturity Model describes characteristics of different organizations in an attempt to understand their different success rates. It originated from Watts Humphrey of IBM and Carnegie-Mellon University.
      1. The five levels in the model relate success rates to the consistency and governance patterns of engineering teams.
      2. Characteristics of each level. [The chart also had a column for where blame is placed; those cells were not captured in these notes.]
        1. Level 1 (Initial): Extremely good or bad results, but unpredictable. Culture & government: creative, heroic; formal commitments are not taken seriously.
        2. Level 2 (Repeatable): Average quality of results is lower than for Level 1 but more consistent, so results can be predicted. Culture & government: committed, stable, consistent.
        3. Level 3 (Defined): Defined procedures are followed, though teams can get sidetracked and disregard results in favor of adherence to procedure. Culture & government: engineering, with shared norms and clear expectations and responsibilities.
        4. Level 4 (Managed): Results are measured and performance is evaluated. Culture & government: engineers are trusted to manage the process.
        5. Level 5 (Optimizing): The process includes improvement of the process itself; there is a structure for feedback, and the aim is to prevent defects from occurring in the first place. Culture & government: self-managed project teams, empowered to innovate and to change the process.
      3. It can take one to five years to evolve an organization from Level 1 to Level 5.
      4. Benefits of pursuing higher levels (as measured in studies).
        1. More product errors are detected.
        2. More product errors are detected at an earlier phase of development, when they are cheaper to fix.
        3. Delivery time is improved.
        4. Resulting savings generally exceed cost by a factor of five.
      5. Some problems with the advice.
        1. The path to improvement looks more straightforward than it really is.
        2. Improvement requires discipline and commitment, expressed partly by allocation of money.
        3. Participants tend to give the levels too much importance in themselves.
        4. Sometimes assessment is undertaken and completed but no changes are instituted.
        5. The sequence of levels is improperly understood as a rigid progression.
    2. Methodology, metrics, and models in general.
      1. Methods and methodology.
        1. Methodology is just a fancy synonym for methods.
        2. Methods are neither magic nor mediocrity.
        3. Good methods are just folklore (stories and guidelines) in book form.
        4. Good methods are just a description of what good developers do.
        5. Many methods are like training wheels. One uses them until one knows better. The use of training wheels is not the same as real work: the training wheels change the experience.
      2. Metrics.
        1. Metrics provide measurements that are meaningful in the context at hand.
        2. Metrics reveal whether actual benefits are occurring.
      3. Models.
        1. Models and diagrams are sometimes an easy way to inflate reports.
        2. Models can be a simplifier to help manage complexity.
        3. Models make it possible to employ intuition and nonverbal pattern-perception abilities.
        4. It's easier to talk about models than to actually build computer programs.
    3. Larry's alternative recommendations.
      1. Good practices are techniques that are known to work (regardless of one's management theory). They include:
        1. Commenting source code.
        2. Code inspections. These have been measured to reveal bugs much faster, and more reliably, than testing does.
        3. Code walkthroughs.
        4. Project planning.
        5. Good code architecture.
      2. Fitting with existing corporate structures and cultures.
        1. Here are some common cultures in development organizations. To promulgate new practices, introduce new knowledge in a way that's compatible with the existing culture. [This chart omits some examples that he mentioned.]


        2. Cultures and their characteristics. [The chart's columns asked: where do the practices live? how are practices enforced? where is the knowledge base? where are decisions made? how can new practices be spread? Not every cell was captured in these notes.]
          1. Informal culture. Practices are enforced by group pressure; the knowledge base is folklore and stories; spread new practices by promulgating new stories.
          2. Groups following defined methods. Practices are enforced by inspections and reviews; spread new practices by defining a new method.
          3. Military-type hierarchy. Practices live in the authority structure and are enforced by standards and audits.
          4. Professional culture. Practices live in professional standards; the knowledge base is research, theory, and professional licensing.
        3. "Mature" practices should fit with the people and how they already do things. They should be good fun as well as good work.
        4. [Comment by an attendee: Maybe different parts of an organization could have different cultures of governance, depending on personalities and circumstances.]
  4. The Quake Graphics Engine -- lecture by Michael Abrash, id Software. Quake will be the successor to id's Doom. [I expect that he'll eventually publish this material in article and book form.]
    1. General comments.
      1. The knowledge base required to master 3D graphics programming is much larger than with 2D. It takes about twelve months to acquire.
      2. The Quake team built about five or six different engines before they understood what they needed to build. If they had first known that, they would have taken only a month or two.
      3. Personal computers still are not fast enough for ideal performance. They set 10 to 15 frames per second as their lower limit on speed. This meant they would need to limit the richness and complexity shown on the screen.
    2. Objectives for the Quake graphics engine.
      1. Leapfrog the capabilities of Doom.
      2. True, arbitrary, six-degrees-of-freedom 3D behavior. True 3D appearance for as many objects as possible. No sprites.
      3. Highest quality. Stability of image as the point of view changes.
      4. Correct color at every pixel.
    3. The basic problem.
      1. Theoretically, one should be able to sample the correct pixel color from the nearest polygon in the scene, and that would be enough.
        1. In practice, with a simple-minded algorithm, the frame rate varies radically depending on the scene.
        2. No present software rasterizer can provide an adequate frame rate (no worse than 10-15 frames per second). Even if one were adequate, the scene designers would increase the scene complexity and the frame rate would decline.
      2. Two major parts to the task:
        1. Quickly reduce the set of polygons to the relevant ones. This is becoming the real technical challenge.
        2. Draw the right pixels from the polygons: Z-ordered, shaded, with subpixel and subtexel accuracy. Rasterization as a software issue is disappearing as hardware improves.
    4. Part one of culling the polygons: the static world of Quake (walls, floors, ceilings). They had around 10,000 polygons lit by arbitrary fixed light sources.
      1. They combined their entire static world into a single continuous skin.
      2. They preprocessed their world into one big BSP tree.
      3. They would clip away the BSP nodes that were totally outside the view pyramid.
    5. Part two of culling the polygons: discarding polygons that are within the view pyramid but are obscured by walls within the scene.
      1. They tried Z-buffering.
      2. They tried edge or span sorting.
      3. They tried using a beam tree.
      4. Some problems arose in the case of portals (holes in a wall).
      5. The solution was to precalculate the potentially visible set (PVS) for each level. That is, for each leaf in the BSP tree, calculate which other leaves it could potentially see.
        1. This resulted in about 20K of data for each level.
        2. The calculation had limited accuracy: perhaps 50% more leaves were recorded as potentially visible than were actually visible.
        3. It was difficult to get the precalculation correct.
        4. Having this information also speeds the drawing of moving objects.
      6. The size of the level becomes relatively irrelevant, because unseen polygons do not matter.
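A leaf-to-leaf PVS can be stored as one bit vector per BSP leaf. The sketch below is a hypothetical layout (the names, sizes, and helper functions are assumptions, not Quake's actual format) showing how cheap the per-frame visibility test becomes once the set is precalculated.

```c
#include <stdint.h>

/* Hypothetical sketch of a per-leaf potentially visible set (PVS):
 * each BSP leaf keeps a bit vector with one bit per leaf in the level.
 * MAX_LEAVES and the struct layout are illustrative only. */
#define MAX_LEAVES 1024

typedef struct {
    uint8_t pvs[MAX_LEAVES / 8];   /* 1 bit per potentially visible leaf */
} leaf_t;

/* Precalculation step: record that leaf `to` is potentially visible
 * from leaf `from`. */
static void pvs_mark(leaf_t *from, int to)
{
    from->pvs[to >> 3] |= (uint8_t)(1u << (to & 7));
}

/* Per-frame culling test: a single shift and mask. */
static int pvs_visible(const leaf_t *from, int to)
{
    return (from->pvs[to >> 3] >> (to & 7)) & 1;
}
```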
    6. Avoiding overdraw. At this point, they still averaged 150% of screen pixels drawn (internally) (that is, 50% overdraw). What was worse, depending on the location, overdraw ranged from 0% to 500%, leading to some unacceptably low frame rates.
      1. Working from front to back, add BSP node edges to a global edge list.
      2. The natural BSP order automatically tells which edges are foremost. The BSP node number contains this information.
      3. Walk the scan lines across the screen, working out what to show. The result was zero overdraw.
      4. The edge list cost 10% overhead, but helped in worst-case scenes.
      5. Shared edges were also detected [?].
      6. Concave polygons provided some benefits, being bigger and fewer [?].
      7. This scheme [?] also reduces the number of polygons sent to the 3D hardware.
      8. On what key should the edge list be sorted?
        1. They tried sorting on 1/z, but this lost the BSP partitioning.
        2. Eventually they sort based on the BSP order. (Lesson: BSP trees contain more implicit useful information than you might think.)
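The zero-overdraw idea can be illustrated with a one-dimensional toy: emit spans front to back and keep a per-pixel coverage flag, so each pixel is written at most once. Everything here (the names, the coverage array) is illustrative and not Quake's edge-list code.

```c
#include <stdint.h>

/* One-dimensional toy version of zero-overdraw span filling: spans are
 * emitted front to back, and a per-pixel coverage flag guarantees each
 * pixel is written exactly once. */
#define WIDTH 16

/* Fill pixels [x0, x1) with `color` wherever nothing nearer has been
 * drawn yet.  Returns how many pixels were actually written. */
static int draw_span(uint8_t *pix, uint8_t *covered,
                     int x0, int x1, uint8_t color)
{
    int written = 0;
    for (int x = x0; x < x1; x++) {
        if (!covered[x]) {
            covered[x] = 1;
            pix[x] = color;
            written++;
        }
    }
    return written;
}
```

Because the nearer span claims its pixels first, the span behind it only fills whatever is still uncovered, so total writes never exceed the screen width.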
    7. Rasterization.
      1. Issues.
        1. Gouraud shading needs triangles or it will not move correctly. [Does that mean that the colors will shift improperly as the object is moved?]
        2. To light the details requires more polygons.
        3. Lighting is not perspective correct.
      2. Solution: surface caching. Precalculate texture lighting on an offscreen buffer.
        1. The final texture-map stage therefore requires no shading.
        2. The inner texture-draw loop requires only 7.5 cycles per pixel.
        3. Fewer different textures are required.
        4. The texture cache requires from 500 kilobytes to 1 megabyte. (Quake requires an 8-megabyte computer).
        5. Textures can be mip-mapped [?].
        6. No rotational variance results.
        7. The resulting lighting is perspective-correct.
        8. Fewer polygons are required.
        9. Any necessary postprocessing can be done on the surface cache before the texture is mapped to the object.
        10. This strategy requires more memory.
        11. The surface cache is too big to fit into the CPU cache.
        12. Individual surfaces can require up to 64 kilobytes [?].
        13. If the lights were to change, it would be slow, because the entire surface would need to be rebuilt. (Quake doesn't do dynamic lighting.) [I think he said there might be a way to fix this.]
        14. When turning the corner into a new room, there is a slowdown as the surfaces are built. (It might be possible to display the first frame with reduced resolution.)
        15. This technique is not a good fit for current 3D hardware, whose texture sizes are limited.
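The essence of surface caching is to fold the lighting into the texture once, off-screen, so the inner texture loop needs no shading at all. A minimal sketch, with illustrative 8-bit values and scaling (Quake's actual formats differ):

```c
#include <stdint.h>

/* Sketch of the surface-cache precalculation: modulate a texture by its
 * light map once, into an off-screen surface, so the per-pixel
 * texture-map loop becomes a plain copy with no shading.  The 8-bit
 * values and the >> 8 scale are illustrative assumptions. */
static void build_surface(const uint8_t *texture, const uint8_t *light,
                          uint8_t *surface, int n)
{
    for (int i = 0; i < n; i++)
        surface[i] = (uint8_t)((texture[i] * light[i]) >> 8); /* light 0..255 */
}
```

After this precalculation, drawing reads from the cached surface directly, consistent with the shading-free inner loop described above.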
      3. Final step: draw the scene.
        1. At this point, they have a list of 8- or 16-pixel spans and a span drawer.
        2. Their rasterizer is 100% floating point, down to 8- or 16-bit subdivisions [?].
          1. Performance suffers on 486 processors.
          2. On Pentiums, FDIV instructions can overlap with other instructions.
    8. Moving entities.
      1. Four types of representation are used.
        1. More complicated, flexible objects with relevant details are polygon models.
          1. General characteristics.
            1. They range from 50 to 400 triangles.
            2. Some have many frames. Each frame takes up to 500 bytes (not as bad as you might think: they are just vertices).
          2. Implementation notes.
            1. One skin per moving entity.
            2. They couldn't be clipped.
            3. The triangles were drawn with a separate affine rasterizer.
            4. They are Gouraud shaded.
            5. They will be lit dynamically.
            6. Integer only.
            7. Bucket-sort on 1/z batches.
            8. The edge list is good up to 200 polygons [?].
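Bucket-sorting on 1/z can be sketched as quantizing 1/z into a fixed number of buckets, so batches come out in approximate depth order without a full sort. The bucket count and the clamping below are assumptions for illustration, not Quake's values.

```c
/* Hypothetical sketch of bucket selection for a 1/z bucket sort.
 * Larger 1/z means nearer, so nearer z values land in higher buckets;
 * ZBUCKETS is an illustrative count. */
#define ZBUCKETS 64

static int zbucket(float z)
{
    float inv = 1.0f / z;
    int b = (int)(inv * ZBUCKETS);
    if (b >= ZBUCKETS) b = ZBUCKETS - 1;  /* clamp very near geometry */
    if (b < 0) b = 0;
    return b;
}
```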
        2. Rectangular objects like boxes, doors, and platforms are BSP models.
          1. These BSP trees get clipped into the world BSP tree and added to the global edge list.
          2. The overdraw prevention is particularly useful for doors.
          3. Within the same BSP leaf, boxy objects are sorted on 1/z. There aren't usually shared edges [?].
        3. Sprites are used for flames. Up close, they don't look 3D.
        4. Particles (and blood) are scaled n-by-n sprites. They have a distinctive behavior.
      2. Z-buffering was performed on the three kinds of non-BSP moving entities (polygon models, sprites, and particle systems).
        1. This prevents sorting errors.
        2. The z-fill entails a 10% cost.
        3. This allows postprocessing: smoke can be stamped on an image at a late stage [?].
      3. The precomputed PVS data is useful for large-scale culling of movable entities.
        1. Level-of-detail (LOD) data is not enough [why is this relevant here?].
    9. Not yet implemented: moving lights.
    10. Conclusions.
      1. An amazing number of techniques are available. Each technique has its strengths and weaknesses.
      2. Try to precalculate and cache as much relevant information as possible.
      3. First simplify the coding and make the data handling uniform (as in the z-buffering applied to the non-BSP moving entities), then optimize the code.
    11. References.
      1. Zen of Graphics Programming, by Michael Abrash.
      2. ___________, by Bruce Naylor (reigning expert of BSP trees).
      3. Procedural Elements for Computer Graphics, David Rogers (McGraw-Hill).
  5. How to Appeal to the Online Gamer -- lecture by Daniel Goldman, CEO of Total Entertainment Network (TEN).
    1. What is different about online gaming?
      1. Customers interact with one another.
        1. Players over age twenty do not like losing to young kids.
        2. Word of mouth is important.
      2. Revenue is based on usage or number of visits.
      3. Game intelligence is centralized, hence not limited by the player's machine.
      4. Persistent game worlds are possible. They can continue to change over time.
      5. Game worlds can be linked to the real world, for example, to real Web pages.
      6. Products retain their interest longer. Players even sometimes return after burning out.
      7. Customers can provide content.
    2. Do you (the game maker) want to do all the work of providing the infrastructure for online gaming?
      1. Customer service.
      2. CDC [customer data center?], guides.
      3. Data security.
      4. Physical security for the headquarters.
      5. Networking and server code.
      6. Work with OSPs (online service providers) and ISPs.
      7. Billing and tracking.
      8. Create the place and the service. (This is more important than the game [he says].)
    3. Areas in which the game service can contribute to the game experience.
      1. Player rankings (but not including beginners).
      2. Tournaments, viewers, and prizes.
      3. Persistent messages. Game history.
      4. Guides, including sysops, hints -- even volunteer guides.
      5. Player matching.
      6. Player handicapping to permit more possible player matches.
      7. Spectator mode (for example, for beginners or during tournaments). (However, cheating becomes possible, as when a spectator reveals a player's poker hand.)
      8. Design of a player's personal identity [that is, avatar].
    4. Miscellaneous points, questions, and answers.
      1. Game worlds are building incrementally toward soap operas.
      2. Broadband connectivity is inevitable, but the rate of adoption is uncertain.
      3. Voice communication is extremely important. TEN's API will include voice support.
        1. It's important to provide a way for a player to disguise his or her voice.
      4. Exclusive relationships between game makers and online services are important.
        1. There are more reasons for partners to contribute and concentrate on the success of the game.
        2. Brand equity is better protected.
        3. When partnering, consider the prospective partner's long-term goals.
      5. Eventually the market will concentrate to about three national online service providers.
      6. What about the nongamer market?
        1. There is a big market: cribbage, chess, "You Don't Know Jack," SimCity, etc.
        2. Big hits are harder to achieve.
      7. How can low latency be accomplished?
        1. Use only selected ISPs and obtain priority handling for your transmissions.
        2. Offer to reconnect a game player through another service provider.
        3. TEN uses the Concentric network (a national ISP), which uses AT&T's frame cloud.
      8. Regular game tournament nights attract spectators (who would ordinarily be watching TV).
      9. How many people can play a structured game at once?
        1. The limit is the bandwidth to the clients [the players].
        2. If more than about 150 people play, then they tend to divide into groups. 150 is about the most that one player can keep track of.
      10. During 1996 TEN will provide 21 games online, of which two have a persistent environment. They are seeking more game contracts.
      11. What about free trial offers to attract new subscribers? TEN believes in this. (It also induces prospective users to judge the quality of the product.)
      12. Retail products: A game need not achieve success as a retail product before it is released online. If it's first popular online, the "buzz" of public discussion will benefit its retail release.
      13. During long play sessions, allow a natural point in the game at which to sign off.
  6. Design Issues for Online Virtual Communities and Playgrounds -- lecture by Ben Calica, who "is responsible for Apple's Game Technology strategy."
    1. Goals.
      1. Excuses for players to spend time on-line, partying together.
      2. Perceived player benefits to offset the shock of the first monthly bill.
    2. What people enjoy.
      1. Making friends
      2. Chat (a compelling and addictive activity).
        1. Chat bullies can scare people away.
      3. Hope of online dating.
      4. Revealing themselves to other players: often they reveal intimate truths or concoct a great lie.
        1. Self-written biographical summaries are very cool. Make it an automatic part of the sign-in process.
        2. Online romantic correspondence resembles romantic correspondences of pre-electronic eras.
        3. New users disillusioned by a lie [someone else's lie or their own?] tend never to return.
    3. Incentives for players to keep coming back (especially after they receive their first monthly bill).
      1. Regulating offensive conversation.
        1. On the ImagiNation Network, when a comment arrives from another player, there are buttons to reply, to mute, or to complain.
          1. "Mute" informs the other player that someone has muted him/her. You will not be able to hear from him/her again [at least during this game, presumably].
          2. "Complain" immediately reports the remark to a supervisor. The supervisor can view the recent conversations and can expel the offending player.
        2. The truth of someone's self-revelation can be validated by the voluntary exchange of a real phone number.
        3. Vigilantism.
          1. On AOL's Neverwinter Nights, some bullies would claim unused rooms and prey on newcomers. Other experienced players banded together to teach and to protect newcomers, advertising their services.
      2. Virtual rewards.
        1. Possible specific rewards. (The goal is for monthly bills to betoken progress toward a goal, not just a financial burden.)
          1. Pseudo-money, such as Worlds Away's tokens. (After friendship, this is the most effective incentive.)
          2. Access to forbidden areas. (Making them forbidden automatically increases their appeal.)
          3. Cool objects, provided that their number is limited.
          4. Space allocation, provided that space is limited and that it must be earned.
          5. Building materials for one's personal part of the game world.
          6. New abilities (flying versus walking).
          7. Greater abilities to refine one's online appearance.
            1. Ben likes appearances that are built by assembling a variety of pieces, like Mr. Potato Head.
          8. Gain the ability to design and to build new parts of the online world.
        2. Principles to govern rewards.
          1. Abide by your own rules. (In Habitat, one player killed the monster, but unexpectedly acquired the monster's gun. Game management attempted to revoke the gun but players protested. Eventually they "bought" the gun in free trade for benefits useful in the game.)
          2. Don't give out all your incentives too early.
    4. Multiplayer dramas.
      1. One approach is vactors: real actors hired to play the major roles.
      2. Let the users be the actors for each other.
      3. Hero-based stories don't work.
        1. Most players want to play a major character who doesn't get stuck off-stage.
        2. Most people are poor actors.
      4. Allow players to play bad-guy parts, too.
      5. See Killobyte, by Piers Anthony.
    5. Ways to make money.
      1. Charge hourly for connect time (but give a reward for time spent on line).
      2. Charge for the software.
      3. Sell advertising space.
        1. Billboards and product placement in the game world.
        2. Display ads while software is being downloaded.

=== End of Nuggets From 1996 CGDC ===

Phil Davidson / Last modified 30 September 1999