Testing

Testing is the process of finding discrepancies between specified and actual behavior. These discrepancies are caused by errors in the source code. Subsequently determining and correcting their cause is known as debugging; this is covered in the Debugging section.

Software failures can also occur when an application hits environmental constraints, such as running out of memory. Although these eventualities are often neglected by application specifications, it is still very important that applications handle them gracefully. Testing procedures should therefore incorporate appropriate tests, such as out-of-memory (OOM) testing, to ensure that applications operate successfully across a range of environmental situations.

Table 13-1. User Heap Macros

All macros refer to the current thread's heap and are defined only for debug builds.

Macro Name

Description

__UHEAP_MARK

Marks the start of heap checking. Must be matched by a corresponding call to __UHEAP_MARKEND or __UHEAP_MARKENDC. If previous calls to __UHEAP_MARK are still open, a new nested level is created.

__UHEAP_MARKEND

Marks the end of heap checking from an earlier call to __UHEAP_MARK. If there is no matching call, then a USER 51 panic is raised. All memory allocated at the current nest level must have been deleted, or it will panic with the address of the first orphaned heap cell.

__UHEAP_MARKENDC(aValue)

Marks the end of heap checking from an earlier call to __UHEAP_MARK. If there is no matching call, then a USER 51 panic is raised. It expects aValue heap cells to still be allocated at the current nest level, or it will panic with the address of the first orphaned heap cell.

__UHEAP_CHECK(aValue)

Checks that the number of cells allocated at the current nest level of the heap equals aValue. If not, it panics, reporting the line number and source file of this statement.

__UHEAP_CHECKALL(aValue)

Checks that the total number of cells allocated on the heap equals aValue. If not, it panics, reporting the line number and source file of this statement.

__UHEAP_FAILNEXT(aAfter)

Simulates heap allocation failure: the aAfter-th subsequent attempt to perform a heap allocation (for example, a call to new) will fail.

__UHEAP_SETFAIL(aNature, aFrequency)

Simulates heap allocation failure depending on the values of the supplied parameters.

aNature: the nature of the failure that is imitated[*].

aFrequency: the frequency of failure.

__UHEAP_RESET

Cancels simulated heap allocation failure.


[*] For example, RHeap::ERandom or RHeap::ETrueRandom; see the definition of TAllocFail under RHeap in the SDK documentation.
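
For example, a typical debug-build usage brackets a block of code with a mark pair. A minimal sketch, where CMyEngine and DoSomethingL() are hypothetical names:

__UHEAP_MARK;    // Open a new nest level of heap checking.
CMyEngine* engine = CMyEngine::NewL();
engine->DoSomethingL();
delete engine;   // All cells allocated at this level must now be freed...
__UHEAP_MARKEND; // ...or this panics with the address of the first orphaned cell.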

This section considers some basic testing strategies and offers information about how you can implement them. The tools that are available to aid testing will be examined, along with some useful techniques for testing Series 60 applications. It is beyond the scope of this section to discuss software testing techniques in general, but many good sources exist on the subject. The software testing FAQ at http://www.faqs.org/faqs/software-eng/testing-faq/index.html should be a good starting point for information on this subject.

Strategies for Testing

Exhaustive testing takes a lot of time and effort to perform, but putting some thought into the methodologies covered below can help to lower the overhead involved.

Test Teams and Procedures

The testing setup for a project needs to be decided before that project commences. The first order of business is to organize a testing team for your project. It is important that any programmer involved in writing code for an application, or a component of an application, should not be included in the testing team responsible for that particular application or component. The reason is that their familiarity with the code compromises the whole process: as they know the code intimately, they may be inclined to test based on how the application does work rather than how it should work. Significant benefits derive from the objective viewpoint that independent testers bring. Apart from ensuring your application's integrity, testers often give essential feedback concerning the usability of the UI.

Second, you need a test plan or test specification. This will specify the methods, inputs and expected results for every feature to be tested. It should not only include the testing of the correct use of the application, but should also consider how the application responds to incorrect input, unexpected events and so on. The test plan can be formulated prior to, or in parallel with, development. However, you should be careful not to write a test plan that simply describes how the application has been implemented; it must describe how you want the application to work.

Unit, Integration and System Testing

Software systems often comprise a number of different components. The typical testing approach is to begin by testing these smaller, more fundamental constituents in isolation. This means it is possible to begin testing before a complete application is written. Once confidence in the individual components is secured, they can be assembled and tested as cohesive subsystems. Because the internal integrity of each smaller component has already been ensured, problems that arise in subsystem tests are likely to be grounded in integration difficulties. By progressing in this manner, from small individual components to holistic testing in multiple increments, the software can be comprehensively tested in an efficient manner. This is the rationale behind unit, integration and system testing.

As the above overview suggests, a unit is an individual software component, and unit testing will test its behavior in a stand-alone environment. Sometimes this will involve the creation of test code that is not part of the release implementation, since usually it is other internal components that invoke its behavior. (Series 60 allows for the specification of such test code as a test project in the bld.inf file; see the entry for prj_mmpfiles in the SDK documentation for further details.)

Integration testing ensures that combined components behave correctly together. In general, all of the components will have been separately unit tested, and so the focus of such testing is on communication between components. This level of integration testing is sometimes referred to as "integration testing in the small."

System testing will discover defects that are properties of the entire system (or at least a deliverable product). It is a high level of integration test, but it also encompasses testing that the application interacts correctly with its environment (sometimes known as compatibility testing or "integration testing in the large"). Testing on hardware is important when performing this sort of end-to-end testing, as there can be subtle differences between hardware and emulator testing (see Differences of Testing on Target versus Emulator later in this chapter).

Functional and Structural Testing

While unit, integration and system testing addresses the scope and progression of the testing effort, functional (or behavioral) and structural testing are two different methodologies used to generate the tests themselves, regardless of scope.

Functional testing checks that the behavior of your component corresponds to its specification. It is generally "black box" testing: there is little or no need to understand the internal workings. The values of parameters passed into this "black box" are either random or based on attributes of the specification.

Structural (or "white box") testing, however, uses specific internal knowledge of the component to guide selection of the test data, in order to test structural integrity. Developers may be able to provide significant guidance to testing plans formulated under this methodology, since they will be intimately aware of potential weaknesses.

Boundary-value analysis is a type of structural testing that uses test values that are on the edge (inside and outside) of allowable value ranges. This sort of testing can be much more useful than just testing random values.

Both of these methodologies are important in designing component tests, but their differences become apparent in the implementation of the test plan.

Performance, Stress and Recovery Testing

Stress testing involves subjecting a system to a load that exceeds expected operational ranges. This testing is particularly important for applications designed to run on devices with very limited resources. Essentially, the system is pushed to breaking point to ensure that an application fails cleanly, without losing data or orphaning resources.

Performance testing is similar to stress testing, but with a reduced load that is more representative of normal use. In contrast to stress testing, the aim of performance testing is to benchmark an application's responsiveness under normal loads. For example, consider an application that must receive multiple SMS messages or handle multiple key presses. Performance testing would determine how many messages or key presses the application could handle within a specified period. This benchmark would then be used to measure the performance impact of subsequent code changes.

The emulator keyboard shortcut Ctrl+Alt+Shift+Z sends the keys A through J in fast sequence to the application, to test its ability to handle rapidly repeated keys. A full list of emulator shortcut keys can be found in Appendix A.


With Series 60 development, a common revelation of stress testing is that the application UI's responsiveness may suffer from long-running background tasks. Such tasks can prevent applications from handling user input in a timely manner, but this can typically be remedied by breaking the offending task into multiple subtasks using Active Objects.
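
As an illustration, the sketch below shows the common self-completing active object idiom: each RunL() call performs one bounded unit of work and then reschedules itself, so pending user input can be handled between steps. CMyTask, DoOneStepL() and the step counters are hypothetical names:

void CMyTask::StartL()
   {
   iStep = 0;
   SelfComplete();             // Kick off the first step.
   }

void CMyTask::RunL()
   {
   DoOneStepL(iStep++);        // One bounded unit of work.
   if (iStep < iTotalSteps)
      {
      SelfComplete();          // Reschedule; pending user input
      }                        // is handled between steps.
   }

void CMyTask::SelfComplete()
   {
   TRequestStatus* status = &iStatus;
   User::RequestComplete(status, KErrNone);
   SetActive();
   }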

Performance tests should also detect problems caused by the occurrence of events that are external to the application under test: for example, how the application reacts to an incoming voice call, SMS, system notification, or alarm. A game application, for instance, should pause when moved to the background by a system event. This sort of behavior is important both for reducing processor load and for usability. A guide to enabling good behavior in your application is given in Chapter 4.

It is important to realize that Series 60 devices will typically be running several applications at once, so performance tests should consider this: high demands on system resources can degrade the performance of other applications.

In cases of low memory, Series 60 may automatically close down background applications to free resources. This may bring its own problems to your test plan!


Recovery testing involves creating test cases for extraordinary situations, such as a dropped connection during data transfer, or a sudden loss of device power. The focus of recovery testing is to ensure that the application handles these situations as responsibly and predictably as possible. Specifically, the tests should aim to verify that there is no data corruption, and where it makes sense, the application should try to revert to the state it was in before the test. For example, it should try to reestablish a lost connection and continue data transfer.

Acceptance Testing

If an application has been written for a particular customer, or group of customers, then acceptance testing may be appropriate. This type of testing involves the end user and is carried out to ensure that the application meets the specifications of the customer. Your application may work perfectly, but if it does not match the requirements of the target users it will fail acceptance testing. Note that if you are writing an application for a particular client, you should be mindful of any criteria they have specified throughout the development process.

As the type of testing necessary will depend upon the nature of the application and the customer, no further information is given here.

Tools and Techniques for Testing

A number of tools are available that you can use to help test your Series 60 application. These tools can be broadly split into static and dynamic categories, and further divided into manual and automatic varieties.

Static tools test software without executing it. In essence they are code review tools, which some might argue are technically quality-assurance tools, not testing tools, but we will treat them alike here. An example of an automatic static tool is the compiler, which will automatically scan source code and find any coding errors. An example of a manual static tool is a source code comparison tool, where the user can track changes between versions of files. Static test tools should typically be used before dynamic tools, and any errors or warnings addressed.

Conversely, dynamic test tools test software by executing it. An example of an automatic dynamic tool is a code coverage tool, which can be useful in tracking down redundant code. An example of a manual dynamic tool is a debugger, where the user can step through code statements to check an application's behavior.

All testing should be automated wherever possible. This not only eliminates many of the tedious tasks associated with manual testing, but also removes the vulnerability to human error.

Suggested Tools

The following is a list of testing and quality assurance tools that should prove useful in testing your Series 60 applications. Some of these tools are free, while others are commercial and will require licensing. This is not, by any means, an authoritative list, and a quick search of the Internet will doubtlessly reveal many more. However, those listed below generally provide specific support for Symbian OS and Series 60 development.

Static Tools

Code comparison tools are static manual testing tools. They can be useful during testing, as they provide a way to determine the code changes that have occurred between different versions of the software and can therefore help you to pinpoint where errors have been introduced. Many code comparison tools are available, from simple text output diff tools to fully featured GUI tools such as Windiff and the comparison tools found with most configuration control software. One particularly useful commercial tool is Beyond Compare from http://www.scootersoftware.com. It allows comparison between two versions of a directory hierarchy, enabling you to easily examine any differences in the directory structure, as well as any changes to a particular file.

Table 13-2 provides details of some automatic static testing tools that you may find helpful throughout the development process. The list starts with the compilers and IDEs that support Series 60 development, and moves on to tools that can detect bad coding practice and deviation from the coding standards.

Dynamic Tools

The most frequently used manual dynamic tools are the debuggers from the IDEs listed in Table 13-2. These allow you to step through code and examine variable values, memory addresses and so on, as the application is running. Further information on debugging Series 60 applications is available in the Debugging section of this chapter.

Automatic dynamic testing tools basically fall into two main categories: code coverage and scripted testing.

Code coverage tools can be used in conjunction with other testing tools, providing a method of examining which parts of your application have been covered by your test cases. This is structural testing at its most comprehensive, and it allows you to add further test cases as required for completeness. Code coverage analysis also allows you to trace any redundant code, which is a useful quality-assurance tool.

Table 13-3 provides details of some of the code coverage tools available for Symbian OS. Note that some tools can be tricky to set up for Symbian OS applications, so you should consult the documentation of your chosen tool for guidance.

Profiling is the process of generating a statistical analysis of a program (showing, for example, the percentage of program execution time used by each function) and can be very useful in finding inefficiencies in code. Many of the code coverage tools and IDEs available provide support for profiling on the emulator, but profiling can also be achieved on target through the use of RDebug. Details are available from FAQ-0426 on the Symbian Knowledgebase; see http://www3.symbian.com/faq.nsf/.


Table 13-2. Automatic Static Tools Useful for Testing

Tool

Description

Further Details

Borland C++ Builder 6 Mobile Edition / Borland C++ BuilderX

IDE compatible with Series 60.

Further details are available from http://www.borland.com/mobile.

Metrowerks CodeWarrior Development Studio for Symbian OS v2.5

IDE compatible with Series 60.

Different editions of the CodeWarrior Development Tools are available. More information is available from http://www.metrowerks.com.

Microsoft Visual C++ 6.0 and .NET

IDE compatible with Series 60.

Further details are available from http://msdn.microsoft.com/visualc/.

GNU C++ compiler, gcc

The cross-compiler used for target builds.

The GNU compiler is provided with the SDK, but source code and further information can be found at http://www.symbian.com/developer/downloads/tools.html, if required.

Note that gcc may come up with different warnings than the compiler in your chosen development IDE. You should aim for zero warnings on all compilers.

PC-Lint

Detects potential C++ problems and bad coding practices.

Available from Gimpel Software (http://www.gimpel.com), PC-Lint provides better error information than most compilers, and it can be customized to suppress particular warnings. You should aim to use this on all code at least once.

Information regarding the configuration of PC-Lint for Symbian OS (and hence Series 60) development can be found in FAQ 0449 on the Symbian Knowledgebase; see http://www3.symbian.com/faq.nsf/.

Leavescan

Checks that code is leave-safe.

The name of any function that may potentially leave must have a trailing L (or LC), in accordance with Symbian OS coding standards (see Chapter 3).

This tool is available free of charge from Symbian. Further information can be found in FAQ 0291 on the Symbian Knowledgebase; see http://www3.symbian.com/faq.nsf/.

EpocCheck

Checks Symbian OS coding conventions.

A Perl script that checks function naming conventions, that member data is not pushed onto the Cleanup Stack, and that there are no IMPORT_C/EXPORT_C mismatches (a DLL API specification; see the SDK documentation for details).

This tool is available free of charge from Symbian. Further information can be found in FAQ 0347 on the Symbian Knowledgebase; see http://www3.symbian.com/faq.nsf/.


Table 13-3. Code Coverage Tools

Tool

Details

Further Information

BullseyeCoverage

Code coverage tool, formerly known as C-Cover.

Further details are available from http://www.bullseye.com/.

LDRA Testbed

Code coverage tool, also provides statistical analysis.

Further details are available from http://www.ldra.co.uk/.

Metrowerks CodeTEST

Code coverage tool, also provides performance and memory analysis, and software execution trace.

Further details are available from http://www.metrowerks.com.

Testwell CTC++

Provides code coverage for Symbian OS emulators, through integration with Microsoft Visual C++.

Further details are available from http://www.testwell.fi/ctcdesc.html.


Scripted testing involves creating test harnesses to test code at the unit level (possibly creating a batch script to run several tests in sequence) or by using tools that automatically drive your application engine or user interface. Such tools may be designed to run from test scripts written purely as a list of text instructions, or they may encompass capture and replay methods to record GUI test input.

Some scripted testing tools are listed in Table 13-4.

Console Applications

Often with large applications, or when porting between different UI platforms, it makes sense to split the application engine into a separate DLL. This is discussed further in Chapter 4. Splitting the engine into its own DLL allows you to write console-based test harnesses to test the engine functionality. This is covered in more detail in the Test Harnesses subsection later in this chapter.

Resource Failure Methods

Series 60 provides methods to assist in resource failure testing, as mentioned in "Heap Testing" earlier in this chapter, such as __UHEAP_FAILNEXT. There is also a GUI interface to these methods on the debug emulator, which can be accessed using the keyboard shortcut Ctrl+Alt+Shift+P, as shown in Figure 13-1.

Figure 13-1. Resource failure testing tool.


This tool allows you to dynamically set heap, Window Server and file access failures to occur in your application, in order to test how your application copes with such events. Your application should not leak memory as a result of resource failure conditions, and ideally it should be able to recover from such conditions and continue operating. At the very least the user should be warned, and no data should be lost. Note that the settings provided by this tool will affect all applications running on the emulator.

Table 13-4. Scripted Testing Tools

Tool

Description

Further Information

Nokia Testing Suite

A free automated testing tool that allows you to emulate user activities on a Nokia Series 60 device or a Series 60 emulator. As well as user tests, test scripts are provided for performance testing applications; these tests must pass in order to gain Nokia approval for your application.

This tool uses test scripts specifying a sequence of key presses and special commands, and involves connecting an application on a device with an application on a PC via infrared or Bluetooth. Further information on this tool is available from the Application Testing section of Forum Nokia ”see http://www.forum.nokia.com.

SymbianOsUnit

A generic C++ unit testing framework for Symbian OS applications. Automated unit testing can be accomplished on both emulator and target.

This tool is provided as an open-source project by Penrillian (http://www.penrillian.com) and is available under the GNU Lesser General Public License (LGPL). Further information can be found at http://www.symbian.com/developer/downloads/tools.html.

Mobile Innovation TRY

Executes text-based test scripts on a device (or emulator) that emulate user input. Test output can be validated by text or screenshot comparison.

Further details are available from http://www.mobileinnovation.co.uk/.

TestQuest Pro

Emulates the actions of a manual tester to facilitate testing on an emulator or device.

Further details are available from http://www.testquest.com/.

Digia Quality Kit

A suite of automated testing tools.

Further details are available from http://www.digia.com/.


The keyboard shortcut Ctrl+Alt+Shift+Q will turn heap failure mode off, but Window Server and file access errors have to be turned off using the dialog itself.

There are other debug keyboard shortcuts defined for displaying resources used, but their usefulness on a Series 60 emulator is limited, as they use the (debug emulator only) CEikonEnv::InfoMsg() method to display information, and this is often truncated by the width of the emulator screen. However, a complete list of emulator keyboard shortcuts is provided in Appendix A.

Debug Output

There are several ways to enable debug output in Series 60 applications, but many of these are poorly documented. Besides user methods, such as outputting text to screen using dialogs, notes, and the emulator-only CEikonEnv::InfoMsg(), there are two basic methods: file logging and serial output.

Serial Output

The undocumented class RDebug is defined in e32svr.h and resides in euser.dll. It offers various debugging APIs, such as profiling (as mentioned in "Dynamic Tools" earlier in this chapter), but of particular importance here is the Print() function.

RDebug::Print() is a printf()-style variable argument function, which takes a descriptor and an optional list of parameters. The descriptor may contain just text or include formatting information for the following arguments, as defined under "Format string syntax" in the SDK documentation. Some basic formatting characters are given in Table 13-5.

Table 13-5. Basic Format Characters

Format Character   Parameter Type   Description

%d                 TInt             Decimal
%u                 TUint            Decimal
%x                 TUint            Hexadecimal
%b                 TUint            Binary
%e                 TReal            Exponential form
%f                 TReal            Fixed form
%g                 TReal            Either fixed or exponential form, depending on
                                    which can display the greatest number of
                                    significant figures
%s                 TText*           Zero-terminated C-style string, in either narrow
                                    or Unicode builds
%S                 TDesC*           Descriptor
%%                 (none)           A literal percent ('%') character

Note the upper-case "S" for descriptors.


On the emulator, RDebug will print data to the IDE output window. On target, it will send data over the serial port (up to 80 characters sent to COM1, writing directly to the UART). As serial ports are generally not part of the hardware on Series 60 devices, this may not fulfill your logging needs! If you have a device or a reference board with an RS-232 serial port, then it is important to ensure that the port is not used for anything else during logging. HyperTerminal can be used to connect to the device, but be aware that the required communications settings are device specific. Similarly, it may be possible to connect over IR, but again the details are device specific.

Despite its name, RDebug code will exist in all builds, so it is important that any logging code is removed from release builds. A macro is probably the simplest way to ensure this:

// Note that _DEBUG is automatically defined
// for debug builds only.
#ifdef _DEBUG
#define TRACE(a) RDebug::Print a
#else
#define TRACE(a)
#endif

Then debug build-only trace code can be added as:

// Note the double parentheses, which allow the
// macro to have a variable argument list.
_LIT(KTraceOutput, "Value of iArray[%d] is %d");
TRACE((KTraceOutput, j, iArray[j]));

File Logging

There are various APIs for file logging on Series 60. You can use the RFile API directly, specifying your own log file format; you can use CLogFile from the Series 60 examples (\Series60Ex\HelperFunctions), as used in the TestFrame example (\Series60Ex\TestFrame); or you can use RFileLogger.

RFileLogger is a simple class that can write descriptor text, printf()-style formatted text, and hexadecimal dumps to a file with optional time and date information. The class is defined in flogger.h and implemented in the library flogger.dll. (Note that the .mmp file statement debuglibrary can be used to specify .lib files for debug-only builds.)

The following code can be used to open the logging session, typically in the ConstructL() of the class you are logging:

// Connect to server.
User::LeaveIfError(iFileLogger.Connect());

// Open log file and leave if there is an error.
iFileLogger.CreateLog(KLogDirectory, KLogFile, EFileLoggingModeOverwrite);
User::LeaveIfError(iFileLogger.LastError());

// Set timing format.
TBool useDate = ETrue;
TBool useTime = ETrue;
iFileLogger.SetDateAndTime(useDate, useTime);

If you are logging from multiple files, then the session can be opened in one class and passed by reference to other classes as needed.

Since iFileLogger is a member instance of RFileLogger, it would be sensible to make it mutable, so that you can call the non-const Write() functions from within const methods that require logging.

KLogDirectory is the name of an existing subdirectory of c:\Logs. If the directory does not exist, then no errors will occur in creating, writing to or closing the log, but no actual output will be written. The function RFileLogger::LogValid() can be used to test that the log can be written to, or RFs can be used to create the directory if necessary; see Chapter 3 or the SDK documentation for details of creating a directory.

KLogFile is the name of the file to log to, and the last parameter specifies the writing mode, either EFileLoggingModeOverwrite or EFileLoggingModeAppend. Note that the log will always be created in the c:\Logs hierarchy; it is not possible to log to removable media.

SetDateAndTime() specifies whether the date and/or time will be output in each line of the log.

Use of the three basic logging functions is demonstrated in the code snippet below. Write() is used for simple descriptors, WriteFormat() is a variable-argument function for formatted descriptors (as described in Table 13-5), and HexDump() writes a hexadecimal dump:

_LIT(KText, "Plain descriptor");
iFileLogger.Write(KText);

_LIT(KTraceOutput, "Value of iArray[%d] is %d");
iFileLogger.WriteFormat(KTraceOutput, j, iArray[j]);

_LIT8(KMemory, "hello this is a memory dump");
iFileLogger.HexDump(NULL, NULL, KMemory().Ptr(), KMemory().Size());

Be aware that static versions of these functions are also available, but these are much less efficient, because they make a new connection to the file server for each line of logging. They should be used only if the required logging is very infrequent.

The above example would produce output of:

14/08/2003 2:36:50 Plain descriptor
14/08/2003 2:36:50 Value of iArray[4] is 10
14/08/2003 2:36:50 0000 : 68 65 6c 6c 6f 20 74 68 69 73 20 69 73 20 61 20 hello this is a
14/08/2003 2:36:50 0010 : 6d 65 6d 6f 72 79 20 64 75 6d 70                memory dump

To close the logging session, use the following code (typically in the destructor of the class you are logging):

iFileLogger.CloseLog(); // Close the log file.
iFileLogger.Close();    // Close the server connection.

Again, you may wish to use compiler directives to conditionally compile the logging code only for debug builds, or just make use of the fact that nothing will be written if the specific logging directory does not exist. The latter approach requires that you refrain from writing code to generate the logging directory, and that you remember to include the required libraries in all builds.

Note that logging code incurs a speed and performance cost and will also increase the size of the binary.


To read the log file on the emulator, just look in \Epoc32\Wins\c\Logs in the root installation directory of your Series 60 SDK. To get the log file off a target device, use a file browser that supports sending (such as FExplorer from http://www.gosymbian.com), or a connectivity solution that lets you transfer specific files, as may be supplied with your Series 60 device. A file browser can also be used to add/remove the logging directory to enable/disable logging, and may also provide functionality to read small log files on the device.

One further point to note is that disk space can be severely limited on target devices. SysUtil::FFSSpaceBelowCriticalLevelL() can be used to check that there is enough space on disk to write to, but this may be overkill when writing a debug log. However, you should note that your log could end because the disk filled up, rather than because of an error at that point in the code!
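
If such a check is wanted, a minimal sketch follows, assuming an open file server session iFs and a hypothetical data buffer aData (sysutil.h, linking against sysutil.lib):

#include <sysutil.h>

// Leave before writing if this write would take the internal
// disk (FFS) below its critical threshold.
if (SysUtil::FFSSpaceBelowCriticalLevelL(&iFs, aData.Size()))
   {
   User::Leave(KErrDiskFull);
   }
iFileLogger.Write(aData);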

Differences of Testing on Target vs. Emulator

Your application may run fine on the emulator but not on target, or vice versa, and this may be due to inherent differences between the two platforms. Although the emulator provides a close approximation of the target environment, differences remain, so it is important to test your applications on target devices to catch any problems that occur only there. As well as obvious differences, such as the availability of specific hardware, there are some more subtle differences, and the main ones are listed below.

The Thread/Process Model

On target, each application runs in its own process. On the emulator, however, each application runs in its own thread (WINS stands for WINdows Single process). This potentially has an impact on the memory model, as threads share writable memory, whereas processes do not. This means that bad pointers may corrupt another application's (or even the system kernel's) memory on the emulator. Also, it is possible to design applications that share memory on the emulator without using specific shared memory APIs. Such applications will not work on a target device.

Random pointers are likely to result in a Kern-Exec 3 panic. Further information can be found in the SDK documentation.


Similarly, process-relative handles can be used only within the process that created them. For example, when passing a handle across a Client/Server boundary, you must use RHandleBase::Duplicate() to convert it into a valid handle in the other process. This restriction is not imposed on the emulator.
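
As a hedged illustration of the principle (the names are hypothetical, and the exact pattern depends on your Client/Server design), a server might duplicate a raw handle value received from a client before using it:

// aClientThread: the client's RThread; aHandleValue: the raw handle
// value passed in the client's message (both hypothetical).
RMutex mutex;
mutex.SetHandle(aHandleValue);
User::LeaveIfError(mutex.Duplicate(aClientThread));
// mutex is now a valid handle in the server's process.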

Hardware Limits

An application must function within certain bounds determined by the target hardware. When running applications on the emulator the same boundaries may not apply; for example, you would not need to be concerned about whether you have enough disk space. This subsection highlights the various constraints that you need to consider.

Heap and Stack Sizes

Target hardware supports different (specifiable) stack and heap sizes for applications, whereas the emulator uses default values that may differ from those on hardware. The default heap size is likely to be much bigger on the emulator than on target! Note that it may be possible to set the emulator stack size used in your IDE. See Chapter 2 for more information on setting stack and heap sizes.

If your application exceeds the available stack size, a __chkstk error will occur when linking during a target build. You should look to split up the functions causing the problem or try increasing the stack size. You should also be aware that recursive functions can still blow the stack at runtime; this will lead to a Kern-Exec 3 panic.

Machine Word Alignment

Series 60 devices use 32-bit ARM chips with RISC architecture for cost and power efficiency. This means that all memory words must be aligned to 32-bit machine word boundaries.

Dereferencing a pointer whose address is not a multiple of 4 will result in an access violation.


32-bit (or larger) struct and class members will be 32-bit aligned by the compiler, with appropriate padding, so sizeof may return a larger value on target than on the emulator. For example, the following structure would have a size of 6 bytes on the emulator and 8 bytes on target:

struct SInfo
   {
   TText8 iText;     // Stored at offset 0, size 1 byte.
   TText8 iText2;    // Stored at offset 1, size 1 byte.
   TInt32 iInteger;  // Stored at offset 4, size 4 bytes.
   };

iText will lie on a 32-bit boundary, iText2 will be stored in the same 32-bit word as iText and iInteger will be aligned to the next available 32-bit word.

C-style arrays will also be aligned, but they are rarely used in Symbian OS. However, code such as the following would generate an access violation the second time through the loop on target, as p would not be a multiple of 4, even though this code is valid on the emulator:

TText8 array[200];
for (TInt i = 0; i <= 196; i++)
   {
   TInt* p = (TInt*)&array[i]; // Needs a cast.
   // Four bytes from the array make one integer.
   TInt n = *p;
   }

Any code that casts one type of packed structure to another might need to be implemented using Mem::Copy(), for example:

TText8 array[200];
for (TInt i = 0; i < 196; i++)
   {
   // &array[i] converts to TAny*, so no cast is needed.
   TAny* p = &array[i];
   TInt n;
   Mem::Copy(&n, p, sizeof(TInt));  // Copy byte by byte.
   }

Note that in code without casts, the compiler will ensure machine word alignment, so no special coding is required.

Out-Of-Disk Errors

The emulated disk space available is constrained only by the available space on your PC's disk. It is important that applications do not assume that sufficient disk space will be available for every write that they make, as it would be on the emulator. Disk space on a device can be quickly used up.

Writable Static Data

The emulator will allow non-const static or global data in applications or DLLs, whereas the Petran tool will give an error at the link stage when building for target. Petran converts Windows (Portable Executable format) executables to ARM format, and although it is possible to "eliminate" such errors by specifying the -allowdlldata flag, executing code containing writable static data will cause an immediate panic on target devices! To implement writable static data you should use Thread Local Storage (TLS) instead; see the SDK documentation for more details.

Note that "const" C++ objects are not constant until their constructor is called, so these also count as writable data.
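
A minimal sketch of the TLS pattern follows, with TGlobals as a hypothetical state structure:

struct TGlobals
   {
   TInt iCount;
   };

TGlobals* Globals()
   {
   TGlobals* globals = static_cast<TGlobals*>(Dll::Tls());
   if (!globals)
      {
      globals = new TGlobals;   // Production code should check for NULL.
      globals->iCount = 0;
      Dll::SetTls(globals);     // Store the pointer for this thread.
      }
   return globals;
   }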


For further information about Petran and how to locate writable static data in your application, consult FAQ-0329 on the Symbian Knowledgebase; see http://www3.symbian.com/faq.nsf.

Timing Differences

Most code will generally run faster on the emulator than on a target device because the processor speeds are usually much higher on PCs.

Furthermore, the standard clock tick interval is different: the interval is 1/10 second on the emulator and 1/64 second on target. Any RTimer::After() requests will be rounded up to the corresponding resolution, and this can lead to subtle timing differences. These may not be a problem for general timeout purposes, but they can affect applications needing higher timing resolutions, such as for animations.

Although perhaps obvious, it is important to understand that there is a minimum wait of one tick, so the minimum wait time is 6.4 times larger on the emulator than on target.
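
For example, the request below asks for a 5-millisecond delay but waits at least one tick: roughly 15.6 ms on target (1/64 second) but 100 ms on the emulator (1/10 second). A minimal sketch:

RTimer timer;
User::LeaveIfError(timer.CreateLocal());
TRequestStatus status;
timer.After(status, 5000);     // 5000 microseconds requested...
User::WaitForRequest(status);  // ...but the wait is rounded up to a tick.
timer.Close();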


Directory Differences

There are some differences between the directories used on the emulator and a target device. First, note that there are actually two emulators: the debug and the release emulator. Each has its own ROM (z:) drive, but they share all writable drives.

Generally, when developing, you will be using the debug emulator, and this uses a path such as \epoc32\release\wins\udeb relative to the root of your SDK. Within this directory will be the emulated z: drive, the emulator itself (epoc.exe) and any DLLs required by the emulator. The first main difference is that DLLs on the emulator typically exist outside of the emulated drive system. In other words, the emulator DLLs are stored on your PC at a level higher than the emulated z: drive, so they cannot be seen by the emulator's file system. On target such DLLs will be stored in \system\libs on the relevant drive.

The shared writable emulator drives (for example, c:) exist in a directory such as \epoc32\wins relative to the root of your SDK.

The actual paths used depend on the IDE you have installed. For example, wins may be replaced by winscw or winsb . If you are using the release emulator rather than the debug emulator, then replace udeb with urel in any paths that specify it.


The second main difference is that the emulator build tools build applications on the z: drive, as the tools were originally designed for building the standard applications that are supplied on your device's ROM drive. Third-party applications will be installed on one of the writable drives on the device: typically the Flash Filing System (FFS), or c: drive, but possibly also on a removable media drive, such as e: (if available). Your code should not make any assumptions about the drive that it is installed on. CompleteWithAppPath() is one way to find out at runtime the drive your application is installed on; see the SDK documentation for further details.
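
A minimal sketch, assuming the global CompleteWithAppPath() helper declared in aknutils.h and a hypothetical data file name:

#include <aknutils.h>

_LIT(KDataFile, "settings.dat");    // Hypothetical file name.
TFileName fullPath(KDataFile);
CompleteWithAppPath(fullPath);      // fullPath now includes the drive and
                                    // path the application was installed to.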

GUI applications run as polymorphic DLLs on both the emulator and target devices. However, Symbian OS executables run differently on each. On target, executables are typically stored in \system\programs on the applicable device drive. Emulated executables run as a single Windows executable, which includes the emulator code. Hence these are typically stored in the same directory as the emulator (for example, \epoc32\release\wins\udeb), which is above the emulated file system.

Server programs run as an executable on target, but as a DLL on the emulator. The macro __WINS__ can be used to conditionally compile any code that is relevant only to emulator implementations, such as E32Dll().
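
For example, one such guard might look like the following sketch (the surrounding server code is assumed; on target the server would instead provide an executable entry point):

#if defined(__WINS__)
// Emulator builds only: the server runs as a DLL, so provide E32Dll().
GLDEF_C TInt E32Dll(TDllReason /*aReason*/)
   {
   return KErrNone;
   }
#endif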

Test Harnesses

As mentioned earlier (in the "Console Applications" subsection), it often makes sense to split the application engine and GUI, and this allows you to write console-based unit tests. Not only are these much simpler than UI-based test applications, but they can also be portable across different UI platforms. Furthermore, they allow for automated testing, where a batch of executables is run and their output (in text format) is compared to a standard set of results using a diff-style tool.

The class Console found in e32base.h can be used to create a CConsoleBase-based console application (see e32cons.h). Although this provides a Printf() method for drawing text to the screen, there is a better way to create console-based test executables: the poorly documented RTest class, which is covered in some detail below.

Also of note, the TestFrame example, which comes with the SDK, demonstrates a Series 60 GUI-based test harness. This allows you to test that an application panics, leaves and handles invariants correctly by locally overriding the standard User calls, although its use requires manual intervention. Further details are given in the SDK documentation; it will not be covered in any depth here.

RTest

The console-based test class RTest is defined in e32test.h . It provides logging to console screen and file, but it cannot be used for testing GUIs, only application engines. In addition, there can be problems if you are trying to write tests for communications, because connection dialogs cannot be displayed in a console application.

RTest is constructed with the title of the test, which is used to identify it in output messages. A call to Title() will output the text passed into the constructor, plus the operating system build version.

_LIT(KTestName, "My test");
RTest test(KTestName);

// Note that, as with all console applications, you
// will have to create your own Cleanup Stack.
CleanupClosePushL(test);
test.Title();

Tests are automatically numbered, and the numbering can be nested to multiple levels. The Start() method opens a new nested level and sets the subtest number to 1. A call to Next() will increment the number at the current level, and a call to End() will close the nested level. Nested subnumbers will appear separated by a dot ("."), for example, 001.02.01.

Each Start() must have a matching End(). If there is no closing End() call, then the "test completed" notification will not be displayed; if there are too many calls to End(), then a panic is generated and "End without matching Start()" is displayed.

Tests are carried out using operator(): the expression in the parentheses is evaluated, and if it is false, then an error message is printed and the test panics. For example, code such as:

_LIT(KTestA, "Test A");
_LIT(KTestB, "Test B");
test.Start(KTestA); // 001
test(ETrue);        // Used to verify how far you have got.
test.Next(KTestB);  // 002
test(aBool);
test.End();

will produce output similar to the following if aBool is true:

RTEST TITLE: My test 1.02(320)
Epoc/32 1.02(320)
RTEST: Level  001
Next test - Test A
RTEST: Level  002
Next test - Test B
RTEST: SUCCESS : My test test completed O.K.

or similar to this if aBool is false:

RTEST TITLE: My test 1.02(320)
Epoc/32 1.02(320)
RTEST: Level  001
Next test - Test A
RTEST: Level  002
Next test - Test B
RTEST: Level  002 : FAIL : My test failed check 1 at line Number: 58
RTEST: Checkpoint-fail

Naming your RTest instance "test" has an interesting side effect: a macro defined in e32test.h expands test(x) to test(x, __LINE__). Hence all operator() calls are automatically expanded to include the line number of the file they reside in. (The constructor is overloaded to ignore this line number.) Note, however, that this can sometimes run against coding standards, in which case simply add the __LINE__ macro into your code by hand.


All output from the executable is printed to the console window. On the emulator, it will also be printed to your IDE's debug window (if applicable) and appended to the file Epocwind.out in your temporary directory (as set by the PC system environment variable TEMP, for example, c:\Temp). Note that this file is always appended to, so when running batches of tests, it is important to delete any existing file first. All tests will then be logged to the same file.

On target, as with the RDebug class (see "Serial Output" earlier in this chapter), any output will also be sent out across the serial port (if available), where it can be logged to file via a connected HyperTerminal.

Typical tests would be to check expected return values from functions, or to validate the internal state of some data. If a test is designed to leave, then it can be useful to add a test(EFalse); line after the line that is expected to leave. If the code reaches this point, then it will panic.
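
For example, to verify that a function leaves with a specific error for bad input (a sketch; DoSomethingL(), badInput and the expected KErrArgument code are hypothetical):

// The trapped code should leave before reaching test(EFalse).
TRAPD(err, iEngine->DoSomethingL(badInput); test(EFalse));
test(err == KErrArgument);  // Verify the expected leave code.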

Other methods supported by the RTest class include:

  • Printf(), which outputs user-formatted text. Note that new lines are not automatically generated.

  • Two Panic() overloads, which call User::Panic() as expected.

  • Getch(), which gets keyboard input and can be useful for pausing the console in manual tests or when running on target without a connection to a HyperTerminal.

  • SetLogged(), which can be used to turn output on or off, and Logged(), which can be used to determine whether logging is turned on.

The console is closed using the Close() method. If this method is left out, the test log will not show any errors, but the executable will panic on exit.

CleanupStack::Pop();    // test
test.Close();

Out-Of-Memory Testing

The test harness code shown so far is not actually taken from a real example. It is designed purely to show how the API works. However, the Testing example shows how a real out-of-memory (OOM) test harness can be created.

The code shown below is not the complete source, but it should be enough to demonstrate the principles. This brings together what you have learned so far about RTest and the heap checking and failure macros.

The basic mechanism for OOM testing is to use __UHEAP_SETFAIL to simulate heap allocation failures. Using a loop, you can comprehensively test how a given code segment responds to every possible heap allocation failure, increasing the failure interval on each iteration until the code finally completes without failure; if the code segment makes X memory allocations, then the loop must repeat X times. This sort of testing proves that your code will not orphan resources or otherwise corrupt data due to OOM conditions.

RTest test(KTestTitle);
...
TInt error = KErrNoMemory;
TInt failAt = 0;
while (error != KErrNone)
   {
   failAt++; // Increase the failure interval.

   // Set the failure interval and start nested heap checking.
   __UHEAP_SETFAIL(RHeap::EDeterministic, failAt);
   __UHEAP_MARK;

   // Run the test code in a trap harness.
   TRAP(error, DoTestL());
   ...

   // End nested heap checking and reset (turn off)
   // heap failure.
   __UHEAP_MARKEND;
   __UHEAP_RESET;
   ...

   // Test that we have no unexpected errors.
   // This will panic and break out of the loop on failure.
   test((error == KErrNoMemory) || (error == KErrNone));
   }

Within each iteration of the loop, the failure interval is increased. By setting the type to EDeterministic, you know that the allocation will fail at the failAt interval specified. A nested heap check is created using the __UHEAP_MARK and __UHEAP_MARKEND macros, and the code to be tested is called from within a trap harness.

If the code being tested leaves, then we can test that the error returned is either KErrNoMemory (the simulated allocation has failed) or KErrNone (the failure interval is now greater than the number of allocations in the test code, so the test has passed).

If the code being tested leaks any memory, then the heap check should pick it up. Remember, though, that the code being tested may be designed to reserve resources, particularly in the case of testing class methods. For example, class members may be allocated that will be deleted in the class's destructor rather than when the method leaves. In such cases the __UHEAP_MARKENDC() macro can be used to account for the nondeleted resources, or the resources should be deleted in the test harness before the __UHEAP_MARKEND macro is called.

A further complication occurs if your test code uses a server session that caches memory. For example, if you create a CFbsBitmap object, then this will use an RFbsSession, and that will cache filename data on the heap. This memory is not freed immediately after the CFbsBitmap is deleted, so the heap check will fail. One possible solution is to allocate and delete a CFbsBitmap before the start of the heap check, in order to preinflate the server's memory cache. The server will then use the existing cache for the next CFbsBitmap object you create, and you will not experience an allocation/deallocation mismatch inside your test.
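
A hedged sketch of that workaround (KBitmapFile is hypothetical, and error handling is abbreviated):

// Warm up the font and bitmap server's cache before heap checking
// starts, so its cached allocations are not counted as leaks.
User::LeaveIfError(RFbsSession::Connect());
CFbsBitmap* warmUp = new (ELeave) CFbsBitmap();
User::LeaveIfError(warmUp->Load(KBitmapFile));
delete warmUp;   // The server-side cache stays populated.
__UHEAP_MARK;    // Subsequent bitmap create/delete cycles now balance.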

The test loop will exit when either the test is passed (error == KErrNone) or RTest or the heap checking macros panic. With the code shown above, there is a slight danger of the test getting into an infinite loop if the test code still runs out of memory with an arbitrarily large value of failAt. In this case, you may do better to use a for loop with an arbitrarily large end value, and break if error equals KErrNone.
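
A sketch of that bounded variant, where KMaxFailAt is an arbitrary safety limit:

const TInt KMaxFailAt = 1000;   // Arbitrary upper bound (assumption).
TInt error = KErrNoMemory;
for (TInt failAt = 1; failAt <= KMaxFailAt && error != KErrNone; failAt++)
   {
   __UHEAP_SETFAIL(RHeap::EDeterministic, failAt);
   __UHEAP_MARK;
   TRAP(error, DoTestL());
   __UHEAP_MARKEND;
   __UHEAP_RESET;
   test((error == KErrNoMemory) || (error == KErrNone));
   }
test(error == KErrNone);        // Fail if the limit was reached first.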

Any leave codes can be checked by looking at the value of error in a debugger. Note, however, that other side effects of OOM conditions may cause panics or access violations elsewhere in the code. If the test does not pass, then you will need to examine the code being tested in a debugger to establish the exact cause of failure.

Note that Cleanup Stack code may also cause heap allocation failures, so bear this in mind if counting the allocations to find the erroneous line. (You will generally find that it is quicker to step through in a debugger!)

Although it is possible to run such a test harness on target (and there may be occasions where code reacts differently to OOM conditions on target), the current Series 60 SDKs do not include target debugging libraries; in other words, you cannot build for armi udeb. The heap checking and failure tools do not work in release builds, so no OOM testing will actually occur, although the test will probably still appear to pass.

Running Test Executables

Series 60 provides no default mechanism to run executables on target. Usually this type of application is limited to system servers, so to run an executable on target, you need to:

  • Package your executable up in a .sis file (see Chapter 2 for details) and install it on your device. Executables sent directly to a device will be blocked from running by the system security measures.

  • Locate an application that can launch executables, such as the file browser FExplorer from http://www.gosymbian.com, or the ExeLauncher utility provided with this book; this can be downloaded with the example applications as shown in the Preface.

The ExeLauncher utility allows you to run any executables in the directory c:\EMCC\Exes, so you must make sure that your .sis file installs them there. Also, all executables must have the suffix .exe.

Then, to run an executable, simply browse to the required file using one of the tools mentioned above and launch it using the selection key or relevant menu item.


