Unit Testing

From Opentaps Wiki

How to Write Unit Tests

opentaps 1.0

For opentaps 1.0, you would write a set of JUnit tests in a class, then define it in an XML testdef file like this:

<test-suite suite-name="entitytests">
    <test-case case-name="security-tests">
        <junit-test-suite class-name="com.opensourcestrategies.crmsfa.test.SecurityTests"/>
    </test-case>
</test-suite>

You can define multiple tests per testdef XML file. Then, add the testdef file to your ofbiz-component.xml, like this:

   <test-suite loader="main" location="testdef/crmsfa_tests.xml"/>

Then, when you do

  $ ant run-tests

your tests will be run.

opentaps 0.9

In opentaps 0.9, you would write your JUnit tests class and add it to the base/config/test-containers.xml file, in the "junit-container" at the bottom, like this:

    <container name="junit-container" class="org.ofbiz.base.container.JunitContainer">
        <property name="base-test" value="org.ofbiz.base.test.BaseUnitTests"/>
        <property name="entity-test" value="org.ofbiz.entity.test.EntityTestSuite"/>
        <property name="service-test" value="org.ofbiz.service.test.ServiceEngineTests"/>
        <property name="crm-security" value="com.opensourcestrategies.crmsfa.test.SecurityTests"/>  <!-- your unit tests -->
        <property name="usps-test" value="org.ofbiz.shipment.thirdparty.usps.UspsServicesTests"/>
        <property name="jxunit-test" value="net.sourceforge.jxunit.JXTestCase"/>
    </container>

Then you would do

 $ ant run-tests

Your tests will run alongside the existing OFBIZ test suites.


  • Use a "test" delegator to point your tests to a separate database, and make sure the "test" delegator defined in framework/entity/config/entityengine.xml points to the right database.
  • The opentaps tests are commented out in hot-deploy/component-load.xml by default, so don't forget to activate them.

Where are the Unit Tests?

All opentaps unit tests are located in hot-deploy/opentaps-tests.

There are also a small number of unit tests from OFBiz in their respective modules, such as framework/entity for the entity engine unit tests.

Setting Up For Unit Testing

We recommend that you create a separate database on the same database server for testing purposes and install all demo data into the testing database. Let's say that this database is called "opentaps_testing". Then, edit the file framework/entity/config/entityengine.xml and define opentaps_testing as a new datasource, called "localmysqltesting" or "localpostgrestesting". Next, load the demo data into the testing database by editing the default delegator:

   <delegator name="default" entity-model-reader="main" entity-group-reader="main" entity-eca-reader="main" distributed-cache-clear-enabled="false">
        <group-map group-name="org.ofbiz" datasource-name="localXXXtesting"/>
    </delegator>

Then do an

  $ ant run-install

to install all the seed and demo data into the testing database. Then you can edit the default delegator back to your original delegator, and set the test delegator to the testing database:

    <delegator name="test" entity-model-reader="main" entity-group-reader="main" entity-eca-reader="main">
        <group-map group-name="org.ofbiz" datasource-name="localXXXtesting"/>
    </delegator>

All unit tests should run against the test delegator. This can be done by instantiating the "test" delegator by name and using that delegator to instantiate a dispatcher. Or you can just write a test suite which extends the OpentapsTests base class, which does this for you.

If you need to modify port settings for the testing instance, you should edit the file framework/base/config/test-containers.xml.

Unit Testing Strategies

These are some strategies for unit testing:

  • Transaction comparison - Compare the transaction produced with a sample transaction, possibly pre-loaded into the system. For example, posting a paycheck to the ledger and then comparing with test data of a correct ledger transaction to make sure that they are equivalent. Equivalence is a very important concept: two sets of transactions will never be identical, since at a minimum they would have different ID numbers, and they would probably reference other transactions with different IDs. For example, each order would have a different orderId and different inventory item IDs reserved against it. However, two orders may be considered equivalent if they have the same set of items, prices, shipping methods, customer, addresses, tax and promotional amounts, etc.
  • State change - Compare the state of the system before and after a transaction has occurred. For example, check the inventory of an item, then ship an order, and check the resulting inventory to make sure that it is correctly decremented. This could get very complex: shipping an order could cause customer balance, invoice, ledger posting, and inventory changes. Multiple tests could be run off the same transaction event.
  • Absolute state check - At all times, certain relationships must hold. For example, the sum of all debits must equal the sum of all credits.
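
The state-change strategy boils down to comparing before/after snapshots of the system. Here is a minimal plain-Java sketch of that idea (the product ID and quantities are illustrative; the real tests do this through InventoryAsserts and the test delegator):

```java
import java.util.HashMap;
import java.util.Map;

public class StateChangeSketch {

    // Compute (final - initial) for every key in the final snapshot,
    // mirroring an inventory check before and after a transaction.
    static Map<String, Integer> difference(Map<String, Integer> initial, Map<String, Integer> fin) {
        Map<String, Integer> diff = new HashMap<>();
        for (Map.Entry<String, Integer> e : fin.entrySet()) {
            diff.put(e.getKey(), e.getValue() - initial.getOrDefault(e.getKey(), 0));
        }
        return diff;
    }

    public static void main(String[] args) {
        Map<String, Integer> before = Map.of("MAT_A_COST", 10);
        Map<String, Integer> after = Map.of("MAT_A_COST", 8);  // two units consumed by the transaction

        Map<String, Integer> diff = difference(before, after);
        if (diff.get("MAT_A_COST") != -2) {
            throw new AssertionError("expected MAT_A_COST to change by -2, got " + diff.get("MAT_A_COST"));
        }
        System.out.println("state change verified: " + diff);
    }
}
```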

Tests should be written against the services that create the original data. For example, if you are writing tests against CRMSFA activity, you can use users from the demo data set, but you should use the CRMSFA activity services to create or update your activities. Otherwise, if you create those activities with some other method, future changes to the services to create activities will not be covered by your unit tests.

Tests should be run against a dedicated testing database with demo and seed data rather than production data. Therefore, the tests generally should set up their own initial conditions and run to completion, but they do not need to "tear down" and remove all data created by the tests. (This would be very impractical: imagine creating and shipping an order. To tear it down would involve reverting order data, customer information, inventory data, shipment data, invoices and payments, and accounting entries.) A good test for the tests is that if you ran the test suite in succession multiple times, they should pass during the second and third runs as well as the first run.

A Unit Testing Tutorial

IMPORTANT: Each unit test method must start with the word "test" -- it must be called testXXX(), not tryXXX() or verifyXXX().

Now let's walk through a particular unit test and see how it works. The one that we're looking at is the ProductionRunTests.java testProductionRunTransactions method. This particular test verifies that a standard manufacturing process is working correctly and checks the inventory results and financial statements. As you read through the code, you will notice that it does the following:

  1. Sets up by first receiving the raw materials (MAT_A_COST and MAT_B_COST) into inventory
  2. Checks the initial state by getting the GL account balances and the initial inventory quantities, both ATP and QOH
  3. Runs through the production run
  4. Checks the final state by getting the GL account balances and the inventory quantities for raw materials and the finished product.
  5. Verifies the following:
    1. The changes in inventory quantities are correct: raw materials are used, so their quantities are reduced, and the finished product's quantity is increased because it is produced.
    2. The changes in GL account balances are correct: inventory value increases and is offset by raw materials and manufacturing expenses.
    3. The financial statements are in balance at all times.
    4. The financial transactions created by this production run are in agreement with the reference transactions MFGTEST-1, -2, and -3. This is done by finding all new financial transactions created after the production run began, as these should only have been generated by the production run.
    5. The unit value of the finished product is correct.

Along the way, the tests will verify that all the services are run correctly and return success as well.

The test case uses the classes InventoryAsserts and FinancialAsserts to obtain information and run tests on the inventory balances and financial statement values. This is a common "delegation" pattern to separate the code for testing assertions into new classes. It also uses methods such as assertMapDifferenceCorrect and assertTransactionEquivalence which are inherited from the OpentapsTestCase and FinancialsTestCase base classes.

For comparisons of GL account changes, we have set a convention so that increases in the balances of debit GL accounts are positive values, and increases in the balances of credit GL accounts are negative values. So, if a transaction caused an inventory GL account to increase by 100 and accounts payable to increase by 100 as well, the GL account changes would be {INVENTORY: 100, ACCOUNTS_PAYABLE: -100}.
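
A handy consequence of this convention is that the changes of any balanced transaction must sum to zero, which makes for a quick sanity check. A minimal sketch (account names and amounts are illustrative):

```java
import java.util.Map;

public class SignConventionSketch {

    // With debit-account increases positive and credit-account increases negative,
    // the GL account changes of a balanced transaction must sum to zero.
    static int netChange(Map<String, Integer> glChanges) {
        int sum = 0;
        for (int amount : glChanges.values()) {
            sum += amount;
        }
        return sum;
    }

    public static void main(String[] args) {
        Map<String, Integer> changes = Map.of("INVENTORY", 100, "ACCOUNTS_PAYABLE", -100);
        if (netChange(changes) != 0) {
            throw new AssertionError("transaction is not balanced: " + changes);
        }
        System.out.println("balanced: " + changes);
    }
}
```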

Note that in this case we used the receiveInventoryProduct service to receive inventory, called the various production run services, and then compared the results versus pre-stored AcctgTrans and AcctgTransEntries. With other tests, such as those for invoices and payments, we have used pre-existing Invoice and Payment records stored in the hot-deploy/opentaps-tests component and merely changed their status codes to verify the results. This brings up an interesting question: what should be done by calling services, and what with existing data?

Our recommendation is this:

  • What you are testing must be done with the services that you would normally use as part of your application. For example, when we are testing GL postings, the user does not actually call the "postInvoiceToGl" service anywhere on the screen. Instead, she would set the invoice status, and the services would run behind the scenes. Therefore, it would not do to call the "postInvoiceToGl" service directly in the test, or worse, manually create the results of that service in the database. Instead, we should be calling the service to set invoice status, which is the same one accessed via the controller.
  • For everything else, do whatever is easiest to set up the pre-conditions for testing. Calling receiveInventoryProduct is pretty easy compared to storing InventoryItem and InventoryItemDetail, so we decided to use that. Calling createInvoice and createInvoiceItem would have been many lines of code, so we just stored an invoice in the database.

Creating Reference Data Sets

In many tests you will see comparisons against pre-stored AcctgTrans and AcctgTransEntries. These are reference data sets which are used to compare actual transactions' results and make sure that they are consistent with the reference. Reference data sets are created in the following way:

  1. Run through a set of business transactions, such as creating an invoice and marking it as READY.
  2. Go to Webtools > XML Data Export and select the entities to export. In this case, it might be the Invoice, InvoiceItem, AcctgTrans, AcctgTransEntry entities. Export them either to a file or to a browser and copy them to a file.
  3. Edit the file of transactions and change the following:
    1. All IDs from the system-generated 100xx to something like "XXX-TEST-###" so that they will not cause primary key conflicts.
    2. For AcctgTrans, change the glFiscalTypeId of all the AcctgTrans from "ACTUAL" to "REFERENCE" so they will not interfere with actual records.
    3. Remove references to entities which would not be part of the reference set. For example, the invoice might be part of the reference set, but the workEffortId, inventoryItemId, etc. referenced by AcctgTransEntry would not be.
  4. Test by loading the new entity XML into your dedicated testing database. It should cause no conflicts.
  5. Add it to the opentaps-tests component's ofbiz-component.xml so that it will load for future tests, and commit it!
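
After these edits, a reference entry might look roughly like this (a hypothetical sketch: only the reference-style ID and the "REFERENCE" glFiscalTypeId follow the conventions above; the transaction type, account, sequence, and amount values are illustrative):

```xml
<AcctgTrans acctgTransId="MFGTEST-1" glFiscalTypeId="REFERENCE" acctgTransTypeId="MANUFACTURING_ATX"/>
<AcctgTransEntry acctgTransId="MFGTEST-1" acctgTransEntrySeqId="00001" glAccountId="140000" debitCreditFlag="D" amount="100.00"/>
```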

Running a Unit Test from Beanshell

After you have written a lot of unit tests, running all of them could take a long time. Fortunately, you can use beanshell to run just one unit test at a time to speed up your development. To do this, you would need to telnet into your beanshell port, then instantiate an object of the unit tests class, and run your test method:

si-chens-computer:~ sichen$ telnet localhost 9990
Trying ::1...
Connected to localhost.
Escape character is '^]'.
BeanShell 2.0b4 - by Pat Niemeyer (pat@pat.net)
bsh % import org.opentaps.tests.purchasing.MrpTests;
bsh % mrpTests = new MrpTests();
bsh % mrpTests.testMrpPurchasedProduct();
bsh %

If the test succeeded, you would see no messages on your beanshell console. If it failed, you would see a stack trace. In both cases, you should see the log messages in runtime/logs/ofbiz.log or runtime/logs/console.log.

To make your life even simpler, you can use File:Tests.bsh.zip as a starting point and modify it for your test, then just call it with the source method from the beanshell console:

bsh % source("tests.bsh");

Writing Unit Tests from Beanshell

You can speed up the process of writing unit tests by developing them in beanshell first, and then putting them into a Java unit test method when you're done. This saves you the time of repeatedly recompiling and restarting opentaps while you are developing the unit tests.

To do this, first you will need a Java tests class. If you do not already have one, you can create one with just setUp() and tearDown() methods which extends a base test case, like this:

public class AccountingTagTests extends FinancialsTestCase {

    public void setUp() throws Exception {
        super.setUp();
    }

    public void tearDown() throws Exception {
        super.tearDown();
    }
}

Then, you can set up a bean shell script for your test. The bean shell script must simulate the behavior of actually running a test, which means initializing the test class first:

t = new org.opentaps.tests.financials.AccountingTagTests();
t.setUp();
t.tearDown();
t.setUp();

Note that you should do a setUp() and tearDown() first to make sure that the test has been properly torn down, before running setUp() again to prepare for your bean shell test code. Now you can put in your testing code:

String supplierPartyId1 = t.createPartyFromTemplate("DemoSupplier", "Test supplier 1 for accounting tags invoice and payments test " + UtilDateTime.nowTimestamp());
String supplierPartyId2 = t.createPartyFromTemplate("DemoSupplier", "Test supplier 2 for accounting tags invoice and payments test " + UtilDateTime.nowTimestamp());
//... more testing code

And finally, at the end, tear it down again:

t.tearDown();
Save this test script, and then follow the instructions above to run it from bean shell.

While writing your unit tests from beanshell, you can "shorten" the script to run just a part of it by using tearDown() and return. For example, here we decided to run just one method first inside of a pretty long test:

Map invoiceItemTypeGlAccounts = t.getGetInvoiceItemTypesGlAccounts("Company", UtilMisc.toList("PINV_FPROD_ITEM", "PITM_SHIP_CHARGES", "PINV_SUPLPRD_ITEM", "PINV_SPROD_ITEM"));
t.tearDown();
return;
//...   More testing code which won't be run

You can also "cheat" and give yourself the values that you should be testing for:

Debug.logFatal("******** finalBalances_CONSUMER *******", "");
t.printMapDifferences(initialBalances_CONSUMER, finalBalances_CONSUMER);
Debug.logFatal("******** finalBalances_GOV *******", "");
t.printMapDifferences(initialBalances_GOV, finalBalances_GOV);

Note that print() will put the results back on bean shell console, while the Debug.log methods will put it on the log file.

When you get your tests to work, you can put it into the Java test case as a new test method:

  public void testSupplierInvoicePaymentCycle() throws GeneralException {
        // creates suppliers and organization
        String supplierPartyId1 = createPartyFromTemplate("DemoSupplier", "Test supplier 1 for accounting tags invoice and payments test " + UtilDateTime.nowTimestamp());
        String supplierPartyId2 = createPartyFromTemplate("DemoSupplier", "Test supplier 2 for accounting tags invoice and payments test " + UtilDateTime.nowTimestamp());
        String organizationPartyId = createOrganizationFromTemplate("Company", "Test organization for accounting tags invoice and payments test " + UtilDateTime.nowTimestamp());
        // ... more testing code
  }

Converting the bean shell code to Java is fairly simple. Just copy and paste your bean shell code between the setUp() and tearDown() into the body of your Java method, and then make the following changes which are usually suggested by IDEs:

  • type and typecast all your variables
  • make sure that doubles are expressed as 1.0 instead of 1
  • fix references to t. Since beanshell is not inside a particular class, we had to refer to our test as t and access its methods through t. Once we put the test code inside a Java method of the test class, we can access its methods directly, so:

new FinancialAsserts(t, organizationPartyId, demofinadmin);

becomes just

new FinancialAsserts(this, organizationPartyId, demofinadmin);

And that's pretty much it!

Debugging Unit Tests with IntelliJ

The default task for tests will do a global compile. To skip this, you can redefine the run-tests target in build.xml as follows,

    <target name="run-tests">
        <java jar="ofbiz.jar" fork="true">
            <arg value="test"/>
        </java>
    </target>

Using a debugger can help speed up development of the unit tests. You can enable debugging by specifying the JVM arguments for your debugging system. For instance, if you have the IntelliJ IDE, the run-tests target becomes,

    <target name="run-tests">
        <java jar="ofbiz.jar" fork="true">
            <jvmarg value="${memory.max.param}"/>
            <jvmarg value="-Xdebug"/>
            <jvmarg value="-Xnoagent"/>
            <jvmarg value="-Djava.compiler=NONE"/>
            <jvmarg value="-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005"/>
            <arg value="test"/>
        </java>
    </target>

You should be able to attach the debugger immediately after running ant run-tests. Don't forget to recompile the component where your tests live.

Another tip is to comment out all unnecessary test suites. Unfortunately, this involves searching every ofbiz-component.xml. One way to find them, if you're on a POSIX OS, is to use find,

$  find . -name ofbiz-component.xml -exec grep test-suite {} \; -print

Dealing with Concurrency

Since each test method runs in a separate thread, there may be concurrency issues when you are using demo or test data in a test. For example, if you are testing whether a certain number of commission invoices are being generated for sales invoices, another thread could be generating additional ones at the same time. This leads to unexpected results which can seem very mysterious.

To avoid concurrency issues, ensure that your tests are using generated data for comparison purposes. For example, rather than using DemoCustomer as the target of a sales invoice, you can create a copy of DemoCustomer,

    String customerPartyId = createPartyFromTemplate("DemoCustomer");
    // create invoice for customerPartyId

From this point, any data that relies on the partyId is sure to be specific to that test only. An example would be when we're checking the customer balance against customerPartyId. If we were checking the balance of DemoCustomer, then we might be thrown off if another thread happens to create an invoice for DemoCustomer at the same time.

Unit Tests and MySQL

MySQL does not store timestamps with sub-second precision. Therefore, primary key violations can occur when several attempts to store data with a timestamp as part of the primary key take place within the same second. For example, if you update the status of an invoice several times during the same second in a test, InvoiceStatus could experience primary key violations because one of its primary key fields is a timestamp. To work around this problem, put pauses inside your test:

   pause("product average calculation");

Running a long test in MySQL might also cause lock wait timeout errors; to avoid this, increase the MySQL innodb_lock_wait_timeout parameter. See mysql tips.

Running and Monitoring Unit Tests

The opentaps unit tests will take a while to run. To monitor them, pipe the output to a file:

$ ant run-tests > tests.log

Then, you can use grep to look for tests that have started and ended:

$ grep 'JUNIT (start)' tests.log
$ grep 'JUNIT (end)' tests.log

and count how many have been run:

$ grep 'JUNIT (end)' tests.log  | wc -l

When they're all done, you can look for the results in runtime/logs/test-results/:

$ ls runtime/logs/test-results/