A number of graphical toolkits are now on the market that provide bitmaps and fonts, as well as buttons, sliders, and other graphical controls for use with mouse-based or touchscreen systems. This article, part 1 of 2, examines some of the issues that you have to address if you’re going to implement your own drawing routines.

As more embedded systems employ graphics to provide a feature-rich user interface, more programmers have to face the challenge of writing the software to control that graphical user interface (GUI). Once a programmer has mastered the data sheet of the chosen graphics controller, initialized the screen, and perhaps filled it with a single color, some thornier drawing issues arise. Where will my fonts come from? How do I get a picture from a .BMP file on my desktop computer into the embedded system? How do I redraw one part of the display without corrupting another part?

The facilities described in this article are provided by any good GUI toolkit. In a number of cases, however, using a third-party solution isn't possible or advisable. If your hardware doesn't conform to a platform already supported by the vendor, you're going to have to port the toolkit, or pay the vendor to do so. This may be expensive, and in some cases you don't need all of the functionality for which you've paid. A toolkit also imposes a particular look. For example, the bevels on the edge of a button are colored to give a 3D effect. If you don't like this effect, you may have to reimplement the button functionality, losing some of the advantage of purchasing a toolkit in the first place. Similarly, the look of a particular toolkit may be appropriate for a large display but unsuitable for a very low-resolution one.

In this article I’ll examine some of the issues that you have to address if you’re going to implement your own drawing routines. Bitmaps and fonts are two topics that many programmers find daunting at first, so I’ll devote most of my attention to them.


On the PC, a large number of graphics cards use the standard VGA modes as a least common denominator. If you have a small display and are restricted by cost, you may be using a simpler controller like the Seiko SED1330 or the Yamaha YGV610B. Controllers such as these can be matched to a range of different LCD screens with choices in the number of grayscales, the resolution, and the backlight type. However, changing the attributes of the screen will usually only affect the initialization code. Many LCD vendors sell the controller and screen in a single package. It’s also common to see the LCD controller as a feature of the microcontroller. For example, the Motorola Dragonball 68328 processor, used in the PalmPilot, has a built-in LCD controller.

Regardless of which controller you choose, you’ll have an area of video RAM (or VRAM) that represents the image on the display. This VRAM may reside in the controller, and each write to it may require a sequence of writes to the controller, where the mode and address of the write are set before the data byte is passed to the controller. This is what happens with the SED1330 controller. The Yamaha YGV610B or a standard VGA can be configured to allow direct access to the video RAM from the CPU’s address and data bus. This has the effect of mapping the VRAM into the address space of the CPU, allowing faster access than having to write the address on one bus cycle and the data in another.
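The difference matters in code. As a sketch (the screen geometry, nibble ordering, and names here are illustrative assumptions, not the actual SED1330 or YGV610B register maps), a drawDot() routine for a memory-mapped, four bits-per-pixel controller reduces to a read-modify-write of a single byte. VRAM is simulated with an ordinary array so the routine can be exercised on a desktop:

```c
/* Simulated memory-mapped VRAM for a 4 bits-per-pixel display.
   Dimensions are assumptions for illustration. */
#define SCREEN_WIDTH  320
#define SCREEN_HEIGHT 240
#define BYTES_PER_ROW (SCREEN_WIDTH / 2)   /* two 4-bit pixels per byte */

static unsigned char vram[BYTES_PER_ROW * SCREEN_HEIGHT];

void drawDot(int x, int y, unsigned char color)
{
    unsigned char *p = &vram[y * BYTES_PER_ROW + x / 2];

    if (x % 2 == 0)                 /* even x: most significant nibble */
        *p = (*p & 0x0F) | (unsigned char)(color << 4);
    else                            /* odd x: least significant nibble */
        *p = (*p & 0xF0) | (color & 0x0F);
}
```

On an indirect-access controller such as the SED1330, the body of drawDot() would instead be a sequence of command and data writes to the controller's registers, which is why the memory-mapped arrangement is so much faster.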

While in some cases you may wish to have a copy of the image on the screen in normal RAM, this is rarely necessary; therefore, it’s a mistake to use the size of the screen to estimate the amount of RAM that must be set aside for controlling the display. The VRAM must be large enough to handle all of the pixels on the display, but it is often larger. Some controllers facilitate copying from one area of VRAM to another, making such an operation far faster than copying from program RAM to VRAM. For this reason, many programmers store fonts and commonly used bitmaps in the VRAM.


You may wish to put borders on areas of the screen and draw simple diagrams, which may represent the process that the device is controlling. The three basic shapes are lines, boxes, and arcs. Once you can draw a dot, the algorithm for constructing each of these shapes can be found in a number of books on graphics.1,2 While building each shape by drawing one dot after another is simple and portable, you’re likely to want to optimize drawing of common shapes for your hardware. For example, a routine that draws a horizontal line will call drawDot() once for each dot. However, if you know that there are two pixels per byte, writing a single byte can draw two pixels in a single operation.
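As a sketch of that optimization (again assuming two 4-bit pixels per byte, high nibble first, with VRAM simulated as an array; all names are illustrative), a horizontal-line routine can draw the unaligned end pixels individually and then write whole bytes for everything in between:

```c
#define SCREEN_WIDTH  320
#define BYTES_PER_ROW (SCREEN_WIDTH / 2)   /* two 4-bit pixels per byte */

static unsigned char vram[BYTES_PER_ROW * 240];

static void setPixel(int x, int y, unsigned char c)
{
    unsigned char *p = &vram[y * BYTES_PER_ROW + x / 2];

    if (x % 2 == 0)
        *p = (*p & 0x0F) | (unsigned char)(c << 4);
    else
        *p = (*p & 0xF0) | (c & 0x0F);
}

/* Fill pixels x1..x2 (inclusive) of row y. Unaligned end pixels are
   drawn singly; the rest are written two pixels (one byte) at a time. */
void drawHLine(int x1, int x2, int y, unsigned char c)
{
    unsigned char pair = (unsigned char)((c << 4) | (c & 0x0F));

    if (x1 % 2)                    /* left end starts on an odd pixel */
        setPixel(x1++, y, c);
    if (x2 % 2 == 0)               /* right end stops on an even pixel */
        setPixel(x2--, y, c);
    for (; x1 <= x2; x1 += 2)
        vram[y * BYTES_PER_ROW + x1 / 2] = pair;
}
```

The byte-at-a-time loop does half the work of the dot-at-a-time version, and on a controller that supports block moves the middle section could be filled even faster.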

You’d be surprised at how many applications don’t require any sloped lines—and horizontal and vertical lines are very easy to draw. A box-filling routine can then easily be constructed using the line drawing routine. Don’t implement a full family of shape drawing routines until it has been established that they are all required for your application.

Displaying bitmaps

Once you’re able to draw dots and lines, simple diagrams can be constructed on the display. A much more professional look can be achieved by using bitmaps. They can be used for small icons indicating the use of a certain key, or they could show a large picture which instructs the user. The picture may show the user inserting a card in a slot or opening the device for service.

Bitmaps will consume a lot of program memory (ROM), but in return you’ll get a far higher quality picture than could have been produced with lines, boxes, arcs, and dots. On desktop systems, bitmaps are typically stored in files in one of a number of formats: BMPs, JPEGs, and XBMs are some common examples. These files may be stored on the hard disk and accessed at run time. Alternatively, in a Windows environment, they may be compiled into the application as resources. Typically in an embedded system, a hard disk will not be available and the compilers and other tools won’t support a resource mechanism. The best approach is to convert the bitmap to a constant array of bytes that is compiled into the program. Because it is constant, it will consume ROM but not RAM.

The following structure can be used to describe a bitmap:

typedef struct {
    int width;
    int height;
    const unsigned char *raw;
} BitmapData;

The raw pointer points to an array of bytes, which is the data that represents the picture itself. The width and height must be stored so that the drawing algorithm knows how big the raw array is and how many pixels to plot before moving to the following line. This array should be declared const to ensure that the only copy exists in ROM, and not in RAM.
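For example, a small 8x4-pixel, four bits-per-pixel image (the pixel data here is made up for illustration) would be declared like this, with both the raw array and the descriptor marked const so they live in ROM:

```c
typedef struct {
    int width;
    int height;
    const unsigned char *raw;
} BitmapData;

/* 8 pixels at 4 bpp = 4 bytes per row; 4 rows = 16 bytes in total */
static const unsigned char arrow_raw[] = {
    0x00, 0x0F, 0xF0, 0x00,
    0x00, 0xFF, 0xFF, 0x00,
    0x00, 0x0F, 0xF0, 0x00,
    0x00, 0x0F, 0xF0, 0x00
};

static const BitmapData arrow_bitmap = { 8, 4, arrow_raw };
```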

We then need some algorithm that will convert the bitmap from an array to visible pixels at run time, by copying the array to the appropriate part of video RAM. This algorithm would have to allow for the number of bits-per-pixel used in the bitmap. It would also have to allow for the width of the bitmap and the width and memory map of the display. Listing 1 shows an algorithm for a bitmap which has four bits-per-pixel. Four bits will allow 16 different colors or 16 different shades of gray.
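Listing 1 itself accompanies the article; the core of such an algorithm, sketched here with a stand-in drawDot() that records pixels in an array (function and type names are assumptions for illustration), walks the raw data one nibble at a time:

```c
typedef struct {
    int width;
    int height;
    const unsigned char *raw;
} BitmapData;

/* Stand-in for the real drawDot(): record plotted pixels for inspection. */
static unsigned char screen[16][16];
static void drawDot(int x, int y, unsigned char color) { screen[y][x] = color; }

/* Draw a four bits-per-pixel bitmap with its top-left corner at (x, y).
   Each raw byte holds two pixels, most significant nibble first. */
void drawBitmap4bpp(const BitmapData *bmp, int x, int y)
{
    int bytesPerRow = (bmp->width + 1) / 2;   /* round up for odd widths */
    int row, col;

    for (row = 0; row < bmp->height; row++) {
        const unsigned char *src = bmp->raw + row * bytesPerRow;
        for (col = 0; col < bmp->width; col++) {
            unsigned char b = src[col / 2];
            drawDot(x + col, y + row,
                    (col % 2 == 0) ? (unsigned char)(b >> 4)
                                   : (unsigned char)(b & 0x0F));
        }
    }
}

/* A 4x1 sample image: pixel values 1, 2, 3, 4. */
static const unsigned char sample_raw[] = { 0x12, 0x34 };
static const BitmapData sample = { 4, 1, sample_raw };
```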

In addition to the raw bytes of data, some other information about the bitmap has to be stored. This information is stored in the BitmapData structure. It has a pointer to the array, as well as width and height information. If a mixture of types of bitmaps is stored, this structure may need to be expanded to store the number of bits-per-pixel, a pointer to a color map (more on color maps follows), or some information to allow the raw data to be decompressed if compression has been used.

The algorithm in Listing 1 calls the drawDot() routine to plot each pixel. By making the code hardware-specific, this algorithm can be improved several-fold. Listing 2 shows how the same bitmap can be copied to a Yamaha YGV610B, taking advantage of the fact that this controller allows its VRAM to be accessed directly. This controller is operating in a four bits-per-pixel mode, so copying one byte into VRAM avoids two function calls to drawDot().

In the YGV610B’s video RAM, the most significant nibble represents a pixel with an even x coordinate. The least significant nibble represents the following pixel, which will have an odd x coordinate. If the bitmap is positioned on the screen at an even x coordinate, each byte in the bitmap data will correspond to a byte in the VRAM, and the bytes can be copied directly. If the bitmap were located at an odd address, then the byte of bitmap data would have to be broken into two nibbles. The low nibble of one byte would then be combined with the high nibble of the following byte to form the byte that would be written to VRAM. This process is obviously many times slower than being able to copy the bytes directly from the bitmap data to the VRAM. In such a case, you could decide to use the fast mechanism when possible and the slower mechanism otherwise. Statistics would suggest that you would use the fast algorithm only 50% of the time, since half of all locations are on odd boundaries. However, because you’re probably in complete control of the application, you could choose to position the bitmaps on even boundaries. In many applications this would be only a minor restriction.
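The nibble-splitting case can be sketched as a small helper (same data format assumptions as above; the name is illustrative). It produces a copy of one row of bitmap data shifted right by one pixel, leaving the first byte's high nibble and the last byte's low nibble clear so that the caller can merge those edge bytes with the existing VRAM contents:

```c
/* Shift one row of 4bpp data right by one nibble. dst must have room
   for nbytes + 1 bytes; the untouched edge nibbles are left as zero. */
void shiftRowOdd(const unsigned char *src, unsigned char *dst, int nbytes)
{
    int i;
    unsigned char prev = 0;        /* low nibble carried from last byte */

    for (i = 0; i < nbytes; i++) {
        dst[i] = (unsigned char)((prev << 4) | (src[i] >> 4));
        prev = src[i] & 0x0F;
    }
    dst[nbytes] = (unsigned char)(prev << 4);   /* trailing odd pixel */
}
```

Every byte now costs two shifts and an OR instead of a plain copy, which is why restricting bitmaps to even boundaries is such an attractive simplification.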

Alternatively, you could construct two versions of each bitmap: one for display on even boundaries and one for display on odd ones. The odd copy would be one byte wider and the first and last nibble wouldn’t be displayed.

While this particular example might not apply to your hardware or your requirements, it demonstrates some important principles. The first is that tuning software for your particular hardware can have a huge impact on performance. Unfortunately, such tuning also makes the code less generic, meaning that you may have to start over if you change the controller.

The second principle is that the alignment and boundaries of the item being rendered may lead to a number of special cases. You may have to handle some or all of these cases, depending on what restrictions you’re prepared to put on the code that uses your drawing routines. Some of these restrictions may be completely unacceptable in a more generic windowed environment, but embedded systems usually have the advantage of allowing the programmer complete control of all of the code.

Color maps

The examples above did not address the issue of color maps. Simple graphics controllers, such as the SED1330, simply don’t support such a concept, but VGA controllers and others do. On a standard VGA controller, in certain modes, 64 colors are available, but only 16 of those can be on display at a given time. A lookup table exists within the VGA chipset which can be changed by software control. The table dictates which 16 of the 64 possible colors can be used at any given time. This structure means that a single pixel is still represented by four bits, but the color to which that value maps can change at run time. Representing the pixels in fewer bits in hardware means that less video RAM is required, and the graphics are faster because less data must be moved around when something changes on the display.

Color maps have another attractive property. They keep bitmaps smaller because the value of a pixel can be stored in fewer bits. For this reason you may choose to implement a color map in software. Each time a pixel is extracted from the bitmap data, you look up a table in software to see which value you should write to VRAM. If all of the bitmaps are restricted to the same few colors, one mapping may mean that the bitmaps consume far less ROM. Of course you’ve paid a price in speed for this optimization in size. The most extreme case of this is a monochrome bitmap in which each pixel is represented by a single bit. The mapping then simply contains two colors, sometimes called the foreground and background colors. Monochrome bitmaps are a convenient way to store fonts that generally don’t require more than two colors.
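Both ideas can be sketched in a few lines (the table contents and color values are illustrative assumptions): a lookup applied as each pixel is extracted from the bitmap, and the monochrome special case in which a single bit selects between foreground and background:

```c
/* Software color map: bitmap pixels are 2-bit indices into a table of
   4-bit hardware color values. */
static const unsigned char colorMap[4] = { 0x0, 0x7, 0xC, 0xF };

unsigned char mapPixel(unsigned char index)
{
    return colorMap[index & 0x03];
}

/* Monochrome case: expand one row byte (MSB = leftmost pixel) into
   eight pixels using the foreground and background colors. */
void expandMonoRow(unsigned char rowByte, unsigned char fg, unsigned char bg,
                   unsigned char out[8])
{
    int bit;

    for (bit = 0; bit < 8; bit++)
        out[bit] = (rowByte & (0x80 >> bit)) ? fg : bg;
}
```

With the 2-bit scheme above, each bitmap pixel costs two bits of ROM instead of four, at the price of one table lookup per pixel when drawing.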

Color maps in hardware allow a number of tricks. Flashing can be achieved by drawing in a certain color, and then redefining the color to be the same as the background color. When the color is remapped to its original color, it will become visible on the display again.

Many smaller controllers that manage a number of gray shades have no need for such a concept in the target software. However, you may have a need for a color map in the software that extracts bitmaps for you from some standard file format, such as BMP. We will examine this conversion presently.

Transferring images from the PC to the target

Returning to the topic of bitmaps, we still haven't solved the problem of converting a bitmap produced by some commercial drawing package into the sort of array that can be displayed by the algorithm shown in Listing 1. My bmp2arr utility performs this conversion. The program uses a simple color mapping that happened to work for the pictures I produced in PaintShop Pro, which involved adding one to each of the color values. In other cases, a lookup table may be required to get the appropriate colors. The program doesn't interpret the color map stored with the BMP; whether you'll need to do this will depend on whether your bitmaps all use the same color map. See Charles Petzold's article, "What's New in Bitmap Formats," for C code examples of how to read the BMP format.5 The Murray book provides an extensive explanation of the BMP format and every other bitmap format that you're likely to encounter.

Because the arrays can get quite large for bigger bitmaps, it's best to store each one in its own file and give access to it through an extern declaration in a header file. This makes it more convenient to change a single bitmap if one image is updated or removed.

Listing 1 and bmp2arr provide a simple way to get an image from the PC to a display. If you’re using a large number of images, some compression may be required. Run Length Encoding (RLE) works quite well for pictures, and ultimately the less detail in a picture, the more compression will be achieved. To implement this method, the bitmap converter would have to compress the data, and then decompress it as it’s being displayed. A small reduction in speed can be traded off against large savings in storage space.
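A minimal RLE scheme might encode the data as (count, value) byte pairs. The exact format is an assumption here, since the converter and the decoder only have to agree with each other; runs are capped at 255 so the count fits in one byte:

```c
/* Encode len bytes of src as (count, value) pairs. Returns the encoded
   length. dst must be large enough (worst case 2 * len bytes). */
int rleEncode(const unsigned char *src, int len, unsigned char *dst)
{
    int in = 0, out = 0;

    while (in < len) {
        unsigned char value = src[in];
        int run = 1;

        while (in + run < len && src[in + run] == value && run < 255)
            run++;
        dst[out++] = (unsigned char)run;
        dst[out++] = value;
        in += run;
    }
    return out;
}

/* Expand len bytes of (count, value) pairs. Returns the decoded length. */
int rleDecode(const unsigned char *src, int len, unsigned char *dst)
{
    int in = 0, out = 0;

    while (in < len) {
        int run = src[in++];
        unsigned char value = src[in++];

        while (run--)
            dst[out++] = value;
    }
    return out;
}
```

The encoder would run on the PC inside the bitmap converter; only the few lines of the decoder have to fit in the target.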

I’ve shown one approach to implementing bitmaps. Another is to save the bitmap in some standard format in your PROM and render it. You can use this approach with the free LibTiff library,6 which allows you to read the TIFF file format. While adding this library to your program may take up more code space than a home-grown approach, you can make up for this by taking advantage of some of the TIFF compression features.

Using fonts

Fonts may at first appear to be a complex topic, but a font is really just a collection of bitmaps organized into an array so that it can be indexed according to ASCII or some other character encoding. Those bitmaps are generally one bit-per-pixel, taking up less room than the multicolored bitmaps I’ve described, though there’s no reason you couldn’t design a set of multicolored characters.

Most fonts used on desktop systems are scaleable, spline-based fonts that allow characters to be generated at any size, on screens of any resolution or aspect ratio. Each character is effectively stored as an algorithm that can generate a bitmap of the appropriate size when required. But this character generation is compute-intensive. The software is also complex, so if you do require spline-based fonts, seeking a third-party rendering library might be a wise move. BitStream is one company that can provide fonts and font-rendering software.7

In an embedded system, the CPU cycles required to process complex character definitions may not be available, and even if they are, they might have better uses. Another reason that the work required from the programmer and the CPU for spline-based fonts may not pay off is that the resolution available on some of the small screens used in embedded systems may mean that shapes will be represented crudely, regardless of the rendering mechanism.

So in many cases a couple of fixed-size fonts are sufficient for an embedded system. The exception is printers, which have advanced needs in the area of font rendering that would not be satisfied by the mechanisms I outline here.

If we accept that we have to store each size individually, then each character has to be generated as a bitmap. One option is to make the font fixed-width, as is done in DOS, for example. While this approach is practical and simple, a far better appearance can be achieved by sizing each bitmap to just surround the character, to create a proportional font. On small displays this has the advantage that strings take up less horizontal space. Once we’ve chosen to implement proportional fonts, we now need to store the size, as well as the bitmap data. We also need an offset from the baseline of the text to allow characters with descenders such as “g” and “p.” The following structure describes a single character:

typedef struct {
    signed char offsetX;    /* baseline of character */
    signed char offsetY;
    unsigned char width;    /* width of char in pixels (bits) */
    unsigned char height;   /* height of char in pixels (bits) */
    const char *bits;       /* raw bitmap data */
} CharData;

The bits array is an array of bytes. For a character that is eight bits wide or less, a single byte is used for each row of the character. If more than eight bits are required, the number of bytes per row is the width divided by eight, rounded up to the next whole integer. So a sample of the bits for two letters shows a small letter that is one byte wide, and a larger one that contains two bytes for each row:

static const char small_letter_A[] = {
    0x20, /* 00100000 */
    0x50, /* 01010000 */
    0x70, /* 01110000 */
    0x50, /* 01010000 */
    0xD8  /* 11011000 */
};

static const char big_letter_A[] = {
    0x03, 0x00, /* 00000011 00000000 */
    0x03, 0x00, /* 00000011 00000000 */
    0x05, 0x80, /* 00000101 10000000 */
    0x05, 0x80, /* 00000101 10000000 */
    0x09, 0x80, /* 00001001 10000000 */
    0x08, 0xc0, /* 00001000 11000000 */
    0x10, 0xc0, /* 00010000 11000000 */
    0x1f, 0xc0, /* 00011111 11000000 */
    0x10, 0x60, /* 00010000 01100000 */
    0x20, 0x60, /* 00100000 01100000 */
    0x60, 0x70, /* 01100000 01110000 */
    0xf0, 0xf8  /* 11110000 11111000 */
};

To render a single character, we need a function to plot a dot for each set bit in the array. Listing 3 shows an algorithm for this, assuming that you already have the ability to draw a dot. As in the case of bitmaps, a far more efficient algorithm could be used if the routine were made hardware specific. The second function in Listing 3 prints a complete string by simply rendering each character in turn.
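Listing 3 itself accompanies the article; the heart of such a renderer, sketched here with a stand-in drawDot() and illustrative names, tests one bit per pixel and skips the clear bits so the background shows through:

```c
typedef struct {
    signed char offsetX, offsetY;   /* position relative to the baseline */
    unsigned char width, height;    /* size in pixels */
    const unsigned char *bits;      /* (width + 7) / 8 bytes per row */
} CharData;

/* Stand-in for the real drawDot(): record plotted pixels for inspection. */
static unsigned char screen[16][16];
static void drawDot(int x, int y, unsigned char c) { screen[y][x] = c; }

/* Render one character with its reference point at (x, y). */
void drawChar(const CharData *ch, int x, int y, unsigned char color)
{
    int bytesPerRow = (ch->width + 7) / 8;
    int row, col;

    for (row = 0; row < ch->height; row++) {
        const unsigned char *src = ch->bits + row * bytesPerRow;
        for (col = 0; col < ch->width; col++)
            if (src[col / 8] & (0x80 >> (col % 8)))
                drawDot(x + ch->offsetX + col, y + ch->offsetY + row, color);
    }
}

/* The small letter from the text: five pixels wide, five high. */
static const unsigned char small_A_bits[] = { 0x20, 0x50, 0x70, 0x50, 0xD8 };
static const CharData small_A = { 0, 0, 5, 5, small_A_bits };
```

A string-printing routine then just calls drawChar() for each character, advancing x by the character's width plus a pixel or two of spacing.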

With such a simple format, designing some of your own characters is quite easy. I often find it useful to design characters to match icons that are printed onto the front panel of the device. The icons can then be referred to easily in the middle of a sentence. (If you have to do such designs in limited resolution, using MS-Paint in magnified mode with the grid turned on is a good idea. Then type the binary values into a spreadsheet to convert them to hex.)

Obtaining fonts

Rather than designing the bitmaps individually, you’re generally better off converting a font from some standard font to an array of bitmaps. The Bitmap Distribution Format (BDF) is one common format which contains a bitmap for each character, as well as size and offset information. This is a good match for a proportional font implementation. A description of the BDF format can be found in the Murray book or on the Internet.8

BDF was originally developed as part of the X Window System, and a large number of fonts are available in this format on the Internet. A simple search for BDF should lead you to plenty of them, but a couple of sites are worth noting.9 One disadvantage of some of the tools that manipulate the BDF format is that they're not as well supported as they once were, because most of the desktop world has moved to spline-based fonts.

The companion download to this article contains the source and executable for a program called fconv, which will convert a BDF file to a C file in the format that is used by Listing 3. If you already own a font that you wish to convert, it may be possible to find a tool that will convert from what you have to BDF, and then—using fconv—to an array of characters in C.

If you obtain fonts from the Internet, check the copyright. Just because you can download something doesn’t automatically mean that you can use it. For most purposes, plenty of free fonts are available.

When searching for fonts, you may wish to implement the same font in a number of styles to allow bolding, italicizing, or underlining. These attributes lead to completely separate fonts and cannot usually be generated from one font. However, I once saw a program that performed the following operation on each character to produce a bold version of it:

character = character | (character << 1)

This operation makes the character thicker, and hence bolder, but doesn’t produce a good quality result, and should only be used if the space occupied by another font causes a problem, or if the use of bold is very rare.
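For characters wider than one byte, the bit shifted out of one byte has to be carried in from the byte to its right. A sketch of the trick applied to one row of an MSB-first bitmap (the function name is illustrative):

```c
/* OR each row with a copy of itself shifted one pixel to the left
   (toward the MSB), carrying the top bit of each following byte into
   the low bit of the current one. */
void emboldenRow(unsigned char *row, int nbytes)
{
    int i;

    for (i = 0; i < nbytes; i++) {
        /* row[i + 1] is still unmodified when we read it here */
        unsigned char next = (i + 1 < nbytes) ? row[i + 1] : 0;
        row[i] |= (unsigned char)((row[i] << 1) | (next >> 7));
    }
}
```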

International fonts

Many of the larger character sets, like Kanji, require a double-byte font. BDF versions of most character sets exist, but the issue is made far more complex due to the fact that a choice of encodings exists. For Kanji, you have a choice of Unicode, Big-5, Shift-JIS (Microsoft’s format), and a number of others. When you’re dealing with English, the decimal number 65 will always represent the letter “A,” but the relationship is not so standard in other languages. You’ll need to be sure that the translator delivers text in the same encoding that you’ve used for your font. If not, you may have to find or write some program to convert from one encoding to another.

The size of the larger sets can be daunting for a small embedded application. One approach I’ve used was to write a utility that scanned the text for all of the unique characters and stored a list of them in a file. I then altered my font conversion utility to only convert the characters that were actually used. For Kanji text, I only had to convert 200 of a possible 5,000 characters. This mechanism wouldn’t be appropriate if the user were able to input characters, but many embedded systems only need to display strings that are fixed at compile time.
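The scanner can be sketched as a bitmap of used codes (the 16-bit encoding and the names here are assumptions for illustration); the real utility would read the application's string files, mark every code it sees, and write out the list for the font converter:

```c
/* One bit per possible 16-bit character code: 8KB of table. */
static unsigned char used[65536 / 8];

/* Mark every code in a string of 16-bit characters as used. */
void markUsed(const unsigned short *text, int len)
{
    int i;

    for (i = 0; i < len; i++)
        used[text[i] / 8] |= (unsigned char)(1 << (text[i] % 8));
}

/* Query whether a code appeared anywhere in the scanned text. */
int isUsed(unsigned short code)
{
    return (used[code / 8] >> (code % 8)) & 1;
}
```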

Having led the horse to water

This introduction has given you a feel for some simple ways to get bitmaps and fonts to appear on a display. Things get a lot more interesting once we want to remove items from the display or move them from one location to another. We’ll investigate that next month, in part 2 of this article. esp

Niall Murphy has been writing software for embedded systems for seven years. He is the author of Front Panel: Designing Software for Embedded User Interfaces, published by R&D Books. Murphy's writing and consulting business is based in Galway, Ireland. He welcomes feedback, and his Web site contains the listings referred to in this article.


1. Abrash, Michael. Zen of Graphics Programming. Scottsdale, AZ: The Coriolis Group Inc., 1996.

2. Foley, James, Andries van Dam, Steven Feiner, and John Hughes. Computer Graphics: Principles and Practice, Second Edition in C. Reading, MA: Addison-Wesley Publishing, 1996.

3. Horton, William. The Icon Book. New York: John Wiley & Sons, 1994.

4. Murray, James D. and William van Ryper. Graphics File Formats, Second Edition. Sebastopol, CA: O’Reilly and Associates Inc., 1996.

5. Petzold, Charles, “What’s New in Bitmap Formats: A look at Windows and OS/2,” PC Magazine, September 11, 1990, p. 403.

6. LibTiff, written by Sam Leffler, is available at libtiff/

7. BitStream Inc.,

8. The BDF Specification document is available at http://partners.adobe.com/asn/developer/pssdk/CONTENTS/FONTS/GENERAL/DOCS/5005.PDF

9. Sites for finding fonts in the BDF format: project/X11R6dev/sun4m_53/src/xc/fonts/bdf/

Listing 1: Algorithm for a bitmap that has four bits-per-pixel
Listing 2: The algorithm in Listing 1 rewritten for a Yamaha YGV610B
Listing 3: Algorithm to render a single character

Copyright © 1999 Miller Freeman, Inc.,
a United News & Media company.