Flasheando

Flash log in "La Nube"

Game Development: About Risk Assessment

Anything we do involves some amount of risk, and the decisions we make about anything should be made with a careful assessment of the risks we assume when taking a certain course of action.

Game development is full of decisions to be made, and therefore it is also full of risks that we must evaluate. In this article I want to go over the most common decisions made in everyday game development, and the risks they usually involve.

I think it is incredibly important to know the possible outcomes of our decisions, and to evaluate what’s best in a given situation. Some risks are more affordable than others, and some alone may outweigh any benefits obtained from a certain course of action.

Taking and avoiding risks:

Usually taking risks comes with benefits; otherwise risks would be irrelevant and this article wouldn’t exist. The problem is knowing when and where to take risks and when to avoid them. And the most important thing is to know what risks are present when you make a decision.

I have had some experience with decisions involving taking or avoiding risks in the past, with mixed results.

I remember a simple example of this, when we were forced to make a tough decision while developing an isometric game in Flash. We were using an open source library to render the isometric grid and, because we needed to do things the library wasn’t designed for, it started performing quite badly. The situation gave us two clear courses of action: we could either try to tune the library’s z-sorting algorithm to do what we needed faster, or we could write our own isometric library specifically tailored to what we needed.

The team decided that, due to time constraints, the best decision was to tune the library’s code; that way we didn’t have to write additional rendering code. We started to work, only to realise that no matter the effort we put into the z-sorting algorithms, we were still behind in performance. After 3 days of hard work (the task having been estimated at just 2 days) and quite thorough research on the cost of going deeper into the library, my lead and I concluded that the best decision was to write our own system for the whole thing. The whole sprint needed to be rescheduled because of this, and some features were postponed. Surprisingly, the new code took just over a week to write and integrate into the game, and the performance gains were even greater than we expected.
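The point of this post is risk, not rendering, but to make “z-sorting” concrete: here is a minimal sketch of the classic depth comparison an isometric renderer runs whenever the display order changes. All names here are mine, not the library’s; IsoEntity is a hypothetical type exposing grid coordinates.

    // Sketch only: IsoEntity is a hypothetical type with gridX/gridY fields.
    function compareDepth(a:IsoEntity, b:IsoEntity):int {
        // Classic isometric depth key: entities further down-right draw on top.
        var da:Number = a.gridX + a.gridY;
        var db:Number = b.gridX + b.gridY;
        return da < db ? -1 : (da > db ? 1 : 0);
    }

    entities.sort(compareDepth); // then re-add children in the sorted order

Re-running a sort like this over hundreds of tiles every frame is the kind of hot spot we were fighting.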

Connecting the example to the subject of this article, my conclusion is that when we planned our course of action we had a set of certainties and uncertainties, all of which was simply information to be evaluated. We clung to the certainties too fast, disregarding the uncertainties (and at the same time inadvertently disregarding risks).

We assumed that refactoring a single class in the library would surely take less time than writing a whole library, and we were partly right, but we failed to assess the risk of the fault being elsewhere in the library’s code. We could say that our first choice had a certainty of 2 days of work while our second choice had 5 at minimum. Taking the risks mentioned above into consideration, however, the first choice, while having a lower minimum workload, presented a risk of a much higher maximum load, which I now estimate at 7 or 8 days including the time it took to realise we needed to go further down into the library’s code. What I am trying to say is that uncertainty carries risk with it, and that it is often a risk we do not measure (frequently due to overconfidence).

Do I think we would have decided to write from scratch if we had had the full risk assessment? No. I think we would have taken the same path, because time was of the essence and our project leaders would have gambled for the lowest possible time anyway. We would, however, have been much better prepared for the extra time it finally took:

  • Our estimation of the workload would have been more accurate.
  • The project leader would have been able to foresee the delays.
  • We would have had contingency plans for the delayed upcoming features.
  • Upper management would have been aware of the risks and prepared.


I intend to continue this article, moving on to a more general example in the game industry.

Development Anti-Pattern: Neverending Crunch Time

Crunch time is a somewhat unavoidable part of game development, and most experienced developers know that they will enter crunch mode at one point or another. When crunch mode lasts too long or happens too often, however, a larger problem may very well exist.

Symptoms:
  • The team is doing overtime routinely.
  • Little or none of the overtime is actually planned or directed at a specific goal.
  • Overtime periods account for an appreciable portion of the total development time.
Likely causes:
  • Not enough time allocated for development or an undermanned team.
  • Bad prioritization and distribution of tasks.
  • Inaccurate or outright wrong task estimations.
  • Too many last minute changes.
  • Preceding “Loss of Aim” problem.
  • Underlying “Bag o’Bugs” problem.
Effects on development:
  • Stressed and exhausted team with decreased performance.
  • Bad or worsening work atmosphere.
  • A higher-than-average number of bugs is introduced.
  • May lead to “Loss of aim” and/or “Bag o’Bugs” (and often does).
Effects on finished product:
  • Game ships incomplete (content cut to make the release date).
  • Game probably ships with a great number of open bugs.
  • Game lacks refinement altogether (refinement is overlooked in the turmoil).
Prevention (before it happens):
  • Make sure the dev team gets to estimate their tasks.
  • Overtime en masse is usually a bad sign; do not disregard it, and ask for more time or more devs if it happens.
  • Pay overtime. This decreases its effect on team morale and makes you want to avoid it due to its cost.
  • Too many last minute changes may mean flawed game design; pause development to reevaluate features in that case.
Solutions (once it happens):
  • You most likely require more time. The earlier this is realised, the easier it will be.
  • Stop crunch time right away. Let the team breathe and gain perspective over the situation.
  • Take a few days to evaluate the team’s composition, performance and size; you may be lacking people (either in quality or in quantity).
  • If estimations are being done, ask yourself and the team why they are failing; if not, start estimating tasks.
  • Give the designers some time to rethink the most troublesome parts of the game (those that bring the most last minute changes).

Back to the anti-pattern list

Development Anti-Pattern: Bag o’Bugs

Bugs will always exist; even when the code is well written and tested, you are bound to have some. This anti-pattern arises when you have too many bugs, to the point of not being able to cope with them at all. Of all the anti-patterns, this is the one for which programmers carry the most direct responsibility.

Symptoms:
  • The bug list is soaring into the hundreds close to launch.
  • Bug numbers do not visibly decrease, or even grow, despite the time spent fixing bugs.
  • Bug fixes start introducing new bugs or reopening older ones.
  • The dev team is wary of fixing certain things.
Likely Causes:
  • Lack of QA during development.
  • Rushed core features.
  • Reusing buggy and untidy prototype code.
  • Unskilled, badly motivated or stressed dev team writing bad code.
  • Underlying “Neverending crunch time” problem.
Effects on development:
  • Daily tasks become stressful and unpleasant.
  • Overall development is frustrating both for the company and the employees.
  • Estimating workload becomes inaccurate and eventually useless.
  • Not fixing something becomes safer than doing so.
  • Likely to produce “Neverending crunch time” (and often does).
Effects on finished product:
  • Game lacks some of the promised features (a feature was too buggy and shipped disabled).
  • Game ships with a great number of open bugs.
  • Game lacks refinement altogether (buggy game, plus bug-fixing cuts into refinement time).
Prevention (before it happens):
  • Hire good quality developers.
  • Make sure you throw away any prototype code you wrote.
  • Give the dev team the chance to estimate features and then try to give them the time they need.
  • Attempt closing features before testing them and test any closed features right away.
  • Do regression testing as often as possible.
Solutions (once it happens):
  • Hire good quality developers.
  • Take a few days to stop development and evaluate the state of the software.
  • Aim at the worst sources of bugs; get the devs to refactor these conflict points.
  • Evaluate the possibility of remaking some of the worst features from the ground up.

Back to the anti-pattern list

Development Anti-Pattern: Loss of Aim

Sometimes developers have doubts about what a feature should do; sometimes the design document is a bit hard to follow. When these problems are taken to an extreme, the team becomes lost and what’s being developed loses its connection with the designer’s ideas, often ruining a good concept and failing to deliver the proposed game as it was intended.

Symptoms:
  • The GDD (game design document) does not reflect developed features.
  • Devs don’t know what the game is about anymore, or have trouble picturing game features.
  • The dev team needs too much guidance from designers when implementing new features.
  • Discrepancies between expected game behaviour and implemented behaviour.
Likely Causes:
  • Working with an incomplete or badly written GDD.
  • Not enough time spent on prototyping.
  • Aggressive team composition changes.
  • Poor team communication.
  • Underlying “Neverending crunch time” problem.
Effects on development:
  • Slower development due to lack of feature definitions.
  • Cutting corners that shouldn’t be cut starts hurting gameplay.
  • A great deal of effort is misplaced on lower-priority issues.
  • May lead to “Neverending crunch time” if not addressed quickly.
Effects on finished product:
  • Game features lack cohesion.
  • The game’s “cover” promises things that are missing from the game.
  • Game lacks refinement altogether (mainly from lack of cohesion).
Prevention (before it happens):
  • Carefully examine the GDD with the whole team before development begins; make sure there are few or no open questions.
  • Use prototyping to see how features connect and to test gameplay. You can keep prototypes as a functional reference for devs.
  • Assign one or more people to keep the GDD updated and to keep the team on target with the game idea.
  • When bringing new people in, give them some time to read the GDD and get to know the existing features.
Solutions (once it happens):
  • Take a few days to stop development and get the team together to review the GDD.
  • Focus production on bug-fixing or refactoring while the GDD is brought up to date.
  • Assign one or more people to keep the GDD updated and to keep the team on target with the game idea.
  • Resume development with renewed priorities, putting changes to whatever does not comply with the new GDD first on the list.

Back to the anti-pattern list

Game Development Series: Anti Patterns

As of today I have accumulated a bit over 4 years of direct game development experience across three different companies and seven different games. At this point in my career, some things are starting to stand out as patterns: attitudes and situations that seem to repeat themselves from company to company.

My aim in this series of articles is to explore some of the less desirable of those patterns, the “anti-patterns” of game development, analogous to those often mentioned in software development and to their cousins, the “code smells”.

I call these my own “game-dev smells”. I want to see why they appear and to think about how they could be avoided or solved.

And without further delay, here’s the list:

Loss of Aim

Bag o’Bugs

Neverending Crunch time

Optimizing the Artists

Working on Flash game development has taught me a couple of things so far. The most important one is certainly the importance of teamwork, and how it can influence the quality of the product in all aspects. When I say all aspects I do mean ALL, so performance does not escape this premise.

Not long ago this knocked on my door, and I suddenly found myself working towards making performance a concern of the game team as a whole. It was a nice challenge and a very rewarding experience. That is why I decided to write this article about how my company coped with the challenge of “Optimizing the Artists”. I hope readers can find some interesting insights on the subject.

The Problem:

As I wrote in older posts, performance in Flash is closely tied to graphics rendering. Code performance is quite inconspicuous compared with the bog-downs that heavy graphics can cause.

In the industry I come from, game artists are usually recruited from the ranks of graphic designers and illustrators accustomed to working in advertising, logo design or websites. The same goes for animators. They lack experience working with developers, and especially with game developers, who usually take (or should take) performance very seriously.

Art teams are usually preoccupied with the quality of the art, so sadly it may never occur to them that detail and graphical quality have their downsides for the game’s performance. Since the number of nodes in a vector graphic or the size of bitmaps are variables that can only be controlled from the art side, art is indeed part of the performance problem…

Games have their own rules, and one of them is that, at the very least, the game must run smoothly. Websites and billboards obviously don’t need this kind of attention. As the ones naturally worried about performance, we developers usually find ways of optimizing graphics from the code side (blitting, rasterization, dirty rectangles, etc.), but there comes a point at which nothing we do seems to suffice, and performance becomes a very time consuming and mind boggling business. When this happens, we must look to the opposite shore and seek help…
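To make one of those code-side techniques concrete before moving on, here is a minimal sketch of rasterization, assuming a vector-heavy DisplayObject built elsewhere (the function name is mine): the vector is drawn once into a BitmapData, and its pixels are displayed from then on.

    import flash.display.Bitmap;
    import flash.display.BitmapData;
    import flash.display.DisplayObject;

    // Sketch: rasterize a vector DisplayObject once and display the pixels.
    function rasterize(source:DisplayObject):Bitmap {
        var pixels:BitmapData = new BitmapData(source.width, source.height, true, 0x00000000);
        pixels.draw(source); // the vector is rendered into the bitmap once
        return new Bitmap(pixels);
    }

From then on the display list renders plain pixels instead of recalculating the vector every frame.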

We must find ways of convincing the art team to hop aboard the performance optimization train, so that performance becomes a shared responsibility and a general concern across the project, and art becomes part of the solution.

The solution: Educating your artists:

From the company’s perspective, the objective of this education must be to build an art team that has both the knowledge and the resources to tackle the most common performance problems. In line with this, the fact that bad game performance affects them as well cannot be stressed enough, and they will quickly understand where and how it affects them once they grasp the most basic concepts. After this is achieved, the art team alone may be capable of finding its own solutions and actually generating the know-how for better-performing but still detailed art.

Hands on the problem:

Art people are usually cool and laid back (at least those I know), so approaching them with technical terms and performance charts can be extremely difficult. Most of the artists in my company had little to no industry experience at all, and the concepts I needed to get across were at times too far-fetched. When I started this “crusade”, I thought I was doomed to failure in advance. The first step of the road was therefore to convince people that performance was important and that there was much to be gained from optimizations on the art side. The first ones to understand this will always be the ones on the dev side, so I started there, and then luckily everything started moving towards the art side, pushed by the tech leads.

Eventually, the matter reached someone with authority, and knowing about performance was made more or less mandatory by the company, so my “educational” approach became simple and straightforward: I wrote down the important points in a PowerPoint and prepared an improvised set of performance talks to go over them and dissipate doubts and concerns. These structured, lecture-type talks were needed because of the large number of people who had to hear this (80+ artists); it even required several sessions. I would have personally preferred something more informal and dynamic: in smaller groups, just talking about it individually with them, or with the lead artists, should work wonders.

This, however, does not end with the performance talks. The most important part, after spreading the word and teaching, is the follow-up. Without it, all the talks will be ephemeral and the concepts will wash out of the artists’ minds quite quickly. If the concepts are not applied and field-tested, all the effort will be nullified. One must keep talking about it, measuring, profiling, answering questions, etc., until one day it starts to walk on its own, and BAM! You’ve got your professional team of game artists! This may not do away with the ever-present performance problems, but now you will have the power of teamwork and the know-how of people who come from a very different background on your side…

I’ll have to write about this subject again to either confirm my theory or throw it in the trash can, depending on how things evolve, for we are still in the “measure, profile and talk” phase.

Cheers!


Flash Performance Series: “Quality Adjustments”

Quality adjustment:

In already-finished projects, the largest (and easiest) performance gains in Flash are made by adjusting the overall quality of the renderer, because this practice doesn’t require changing anything inside the project.

In the ideal case, the quality setting should be left at the highest possible level, so the app takes advantage of Flash’s clear and well-defined graphics. Lowering the overall quality must therefore be a last resort, after all other possible optimizations have been done.

Flash quality adjustments primarily affect image anti-aliasing (covered next), which is a time consuming but rewarding visual effect applied to everything that Flash renders.

NOTE: Flash’s default display quality is “HIGH”.

Flash Anti-aliasing:

Anti-aliasing (AA) works the same way in Flash as it does in other applications. It consists of a process of adding mid-tone pixels to the jagged edges of images or text, so the human eye perceives a smoother image instead of jagged lines that look like a ladder.

In this process, the platform finds the pixels that sit on the edges of lines and “smooths” them by adding pixels of intermediate colors (usually an average between the line color and the background).

The highest levels of AA in more sophisticated applications even tune the alpha values of these pixels to further increase the (apparent) quality of the lines. Complicated as it is, this process is costly for the CPU and adds a lot of work to the rendering cycles. This applies to both image and text rendering.
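As a toy illustration of that averaging (my own numbers, not Flash’s actual algorithm), a pixel half-covered by a black line over a white background blends to mid gray:

    // Toy example: blending one edge pixel, channel by channel.
    var lineChannel:uint = 0x00;    // black line
    var bgChannel:uint = 0xFF;      // white background
    var coverage:Number = 0.5;      // the line covers half of this pixel
    var channel:uint = Math.round(coverage * lineChannel + (1 - coverage) * bgChannel); // 0x80
    var blended:uint = (channel << 16) | (channel << 8) | channel; // 0x808080, mid gray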

Anti-Aliasing (quality) levels:

  • StageQuality.LOW—Low rendering quality. Graphics are not anti-aliased, and bitmaps are not smoothed.
  • StageQuality.MEDIUM—Medium rendering quality. Graphics are anti-aliased using a 2 x 2 pixel grid, but bitmaps are not smoothed.
  • StageQuality.HIGH—High rendering quality. Graphics are anti-aliased using a 4 x 4 pixel grid, and bitmaps are smoothed if the movie is static.
  • StageQuality.BEST—Very high rendering quality. Graphics are anti-aliased using a 4 x 4 pixel grid, and bitmaps are always smoothed.
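In code, the quality level is a single property of the stage. A minimal sketch of lowering it as a last resort (run from a display object that is on the display list, so stage is available):

    import flash.display.StageQuality;

    // The default is StageQuality.HIGH; lower it only after other optimizations.
    stage.quality = StageQuality.MEDIUM; // 2 x 2 anti-aliasing, no bitmap smoothing
    // or, for the cheapest rendering possible:
    // stage.quality = StageQuality.LOW; // no anti-aliasing, no bitmap smoothing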

Back to series Index!

Flash Performance Series: “Cache As Bitmap”

The “cacheAsBitmap” flag:

When you ask Flash to use cacheAsBitmap on an image, it keeps a copy of the vector image’s point array in memory (for swapping), but uses a converted (bitmap) version for rendering, so the vector image doesn’t need to be calculated and redrawn every frame. Thanks to the saved point array, display objects can be swapped to and from bitmap format as much as one needs.

USAGE: Use on complex vector images that don’t transform, for example complex background images (moving or not), non-animated sprites and most user interface elements.

ADVANTAGES: Bitmaps, unlike vector images, are not recalculated on every frame by the Flash player unless they undergo a transformation. Thanks to this, cacheAsBitmap allows for some performance gains when handling static, complex vectors.

WARNING: Animated (transforming) objects are going to be redrawn regardless of whether they are bitmaps or vectors, so using cacheAsBitmap on animated Sprites or MovieClips will actually make them slower than plain vectors, forcing Flash to recalculate the vector image, redraw it, save it and regenerate the bitmap copy once every frame for as long as the object keeps changing internally.
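A minimal sketch of both cases; background and animatedCharacter are placeholder names for display objects in your own project:

    // Good candidate: a complex but static vector background.
    background.cacheAsBitmap = true;

    // Bad candidate: a clip whose contents change every frame; the cache
    // would be invalidated and rebuilt constantly, making things slower.
    animatedCharacter.cacheAsBitmap = false;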

Back to series Index!

Flash Performance Series: “Vectors & Bitmaps”

Animation:

Vectors are mathematical representations of images: they store only the numerical values needed to draw the image from scratch, and therefore have the advantage of being very small in size, which is very useful for saving bandwidth.

The problem with vectors is that all the advantages in size become disadvantages in processing requirements. The format of vector images means that Flash will draw the image from scratch in each rendering cycle (frame). This is an increased load compared with bitmaps:

Flash has to calculate the vector’s nodes and resolve pixel positions in addition to drawing the pixels in place. With bitmap images, only the latter is done, and therefore less processor time is consumed in the rendering cycle. You can improve this from AS3 by using the cacheAsBitmap flag on DisplayObjects. This requires Flash to calculate the pixels only once, and then (as long as the vector doesn’t change) draw it as if it were a common bitmap image. More on this in the next section.

Scaling:

We know that scaling bitmaps is avoided at all costs by Flash developers because of the serious loss of image quality it brings.

On the other hand, we know that scaling a vector image is very easy to do and there is no loss of quality. In fact, because vectors are rendered at runtime, you can scale a vector to many times its original size and still have defined lines and good looking color fills.

All this definition has its price in terms of CPU consumption, and the larger the number of scaled vectors on-screen, the bigger the burden on the processor will be. Keep this in mind when scaling vectors: some is O.K., but if you need to scale the whole stage, consider redrawing the vectors or scaling their individual internal parts.
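One way to pay the scaling cost only once is to rasterize the vector at the target size. A minimal sketch, with vectorArt and the function name being mine:

    import flash.display.Bitmap;
    import flash.display.BitmapData;
    import flash.display.DisplayObject;
    import flash.geom.Matrix;

    // Sketch: render the vector once at the scaled size, then reuse the bitmap.
    function snapshotScaled(vectorArt:DisplayObject, scale:Number):Bitmap {
        var m:Matrix = new Matrix();
        m.scale(scale, scale);
        var pixels:BitmapData = new BitmapData(vectorArt.width * scale, vectorArt.height * scale, true, 0x00000000);
        pixels.draw(vectorArt, m); // rasterized at the target size, only once
        return new Bitmap(pixels);
    }

The bitmap trades the vector’s CPU cost for memory, so it suits scale factors that stay fixed for a while.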

Back to series Index!

Flash Performance Series: “Resolution & redraw regions”

Resolution & redraw regions:

Generally, performance bog-downs in Flash come down to how much is being rendered at a given moment, namely, what percentage of the screen is being redrawn. The larger the screen, the larger the number of pixels Flash has to redraw each frame, so in all respects an 800×600 Flash app will be slower than a smaller one.

Apart from this, the size of independent redraw regions also influences performance. This means that in some cases a complicated but small image (a small star or a drawing of a person) may be easier for Flash to render than a much simpler but much larger shape (say, a square or rectangle). Also, rendering small things at the edges of the screen can enlarge the overall redraw region, giving Flash more work to redraw a whole frame.

Because of these two things, it is always good practice to keep both the overall app resolution and the redraw region sizes as small as possible, as well as to avoid animations and effects at the edges of the screen.
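If you run the debugger version of the Flash Player, you can see these regions directly; a quick sketch using the flash.profiler utility:

    import flash.profiler.showRedrawRegions;

    // Debugger-player-only: outlines the regions Flash redraws each frame,
    // making oversized redraw regions visible at a glance.
    showRedrawRegions(true);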

Back to series Index!
