Writing production-ready batch processes is not typically viewed as the glamorous side of software development. It’s often grouped with acceptance testing and documentation as a task to be avoided during development. However, solid production batch code performs well, handles disruptions gracefully, and allows staff to enjoy their time outside of office hours. Additionally, software enhancements and system upgrades can be applied much more safely when the production batch processes are known to be stable.
No matter the language, tool, or platform, the following guidelines must be considered when writing safe production batch processes.
Restart Without Intervention
This may seem obvious, but batch processes should be designed to restart without manual intervention. Whether it writes to the file system or a database, a batch process should clean up or remove any data or files it generates before it starts. This is easy to test: just run the program repeatedly and confirm the output is the same each time. Note that this also includes persistent logs, which the process should attempt to clear in the event of a re-run.
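The restart test described above can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical extract job that writes to `daily_extract.csv`; the file name and record format are invented for the example.

```python
# Minimal sketch of a restartable batch step. OUTPUT_PATH and the record
# format are hypothetical, chosen only to illustrate the pattern.
from pathlib import Path

OUTPUT_PATH = Path("daily_extract.csv")  # hypothetical output file

def run_extract(records):
    # Remove any output from a previous (possibly failed) run first,
    # so re-running the job always starts from a clean slate.
    OUTPUT_PATH.unlink(missing_ok=True)

    with OUTPUT_PATH.open("w") as out:
        out.write("id,amount\n")
        for rec_id, amount in records:
            out.write(f"{rec_id},{amount}\n")
    return OUTPUT_PATH.read_text()

# The restart test: run the job twice and confirm identical output.
first = run_extract([(1, 100), (2, 250)])
second = run_extract([(1, 100), (2, 250)])
assert first == second
```

Because the job deletes its own output before writing, a second run after a mid-run failure produces the same result as a clean first run.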
Clean Up at the Beginning
This aligns well with the first point. Any temporary files or data created by the process should be removed at the start of the process rather than at the end. Cleaning up temporary files at the end makes troubleshooting extremely difficult, because the breadcrumbs left by the process have been neatly removed. If possible, clean up all temporary files from the previous run in a single step at the beginning of the process, keeping the intermediate files or data available for troubleshooting between process runs.
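A sketch of this pattern, with a hypothetical working directory and file names: the single cleanup step runs first, and the run's intermediates are deliberately left on disk when the job exits.

```python
# Sketch: delete the PREVIOUS run's intermediate files at the start of the
# job, never at the end, so they remain available for troubleshooting.
# WORK_DIR and the .tmp naming convention are hypothetical.
import glob
import os

WORK_DIR = "batch_work"

def start_of_run_cleanup():
    # One cleanup step at the beginning removes last run's breadcrumbs.
    for path in glob.glob(os.path.join(WORK_DIR, "*.tmp")):
        os.remove(path)

def run_job():
    os.makedirs(WORK_DIR, exist_ok=True)
    start_of_run_cleanup()
    # ... real work would go here; intermediates are written and then
    # left in place at exit for inspection between runs.
    with open(os.path.join(WORK_DIR, "step1.tmp"), "w") as f:
        f.write("intermediate data for troubleshooting\n")

run_job()
```

Running the job again simply repeats the cleanup-then-write cycle, which also satisfies the restart-without-intervention test.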
Segment Work by Function
Logically segmenting the work by functional component makes debugging, enhancement, and troubleshooting easier. Grouping too many components into one process can make troubleshooting difficult; it often forces manual exits in scripts or commenting out undesired code just to troubleshoot one segment. Whether a process can restart without intervention is a good test of proper segmentation: if automating the restart is very difficult, the program should probably be broken up into several smaller processes.
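The idea can be sketched as a pipeline of separate functions, one per functional component, so any stage can be re-run or tested in isolation without manual exits or commented-out code. The stage names and sample data are illustrative.

```python
# Sketch of functional segmentation: each stage reads its input and
# returns its output, so each can be exercised alone while troubleshooting.
def extract():
    # Illustrative raw input; a real job would read a file or query a table.
    return [" alice ", "BOB", ""]

def transform(rows):
    # Normalize and drop empty records.
    return [r.strip().lower() for r in rows if r.strip()]

def load(rows):
    # A real job would write to a database or file; here we just summarize.
    return {"loaded": len(rows), "rows": rows}

def run_pipeline():
    return load(transform(extract()))

result = run_pipeline()
```

Because each stage has a clear input and output, a failed run can be debugged by calling the suspect stage directly with the previous stage's saved output.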
Right Tool for the Job
Some tools and languages support external functions or procedures where code not native to the tool can be used to execute an environment-specific set of code. While it may seem logical to use the same primary language throughout, error handling and language- or script-specific features can be difficult to pass back from external procedures to the primary batch process tool. A better solution is to use a scheduling tool that supports several different languages and scripts, allowing each external function to be executed in its native environment.
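One common way to run a step in its native environment while still getting usable error handling back is to launch it as a child process and rely on its exit code and stderr. A minimal sketch, where the command being launched is illustrative:

```python
# Sketch: run an external step in its own interpreter/environment and
# surface failures through the exit code rather than embedding foreign
# code in the batch tool. The launched command is illustrative.
import subprocess
import sys

def run_external_step(cmd):
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode != 0:
        # A nonzero exit code is the contract between environments.
        raise RuntimeError(f"step failed: {proc.stderr.strip()}")
    return proc.stdout

# Example: run a trivial step in a separate Python interpreter.
out = run_external_step([sys.executable, "-c", "print('step ok')"])
```

The same pattern works for any language the scheduler can launch, since exit codes and standard streams are universal.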
Perform and Share Resources Well
Batch process performance has two key components. First, the process must perform adequately so that it does not unduly delay the completion of key process outputs or SLAs with business partners. Second, it must share resources appropriately and use them efficiently during its normal processing window. If a large process consumes so much of the system’s resources that other processes are negatively impacted and potentially miss their own SLAs, it must be tuned to make more efficient use of the resources available at the time of execution.
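One simple technique for keeping resource use bounded (offered here as an illustration, not prescribed by the text above) is to process input in fixed-size chunks, so memory peaks at one batch rather than the whole dataset. The chunk size is a tuning knob chosen for the environment.

```python
# Sketch: bound resource use by working on one fixed-size chunk at a time.
# The chunk size and the summing workload are illustrative.
def chunks(items, size):
    for i in range(0, len(items), size):
        yield items[i:i + size]

def process_in_chunks(items, size=1000):
    total = 0
    for batch in chunks(items, size):
        # Only one bounded batch is held and processed at a time, so the
        # job's footprint stays flat for the whole run.
        total += sum(batch)
    return total

result = process_in_chunks(list(range(10)), size=3)
```

With streaming input (a database cursor or file iterator), the same shape keeps a large job's footprint small enough to coexist with other work in the processing window.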
Document and Log the Process
It would be foolish to think that good documentation techniques are unnecessary for production batch processes. Comments in scripts and code should be frequent and should clearly describe the purpose of each segment, key line, or formula. A comment block at the beginning of each script or source module should describe the purpose of the program and its revision history. Incorporate file or table logging for key steps in the process to diagnose failures and assist with performance troubleshooting. If performance is critical and varies with processing conditions, input data, and so on, consider logging the start and stop points of key steps to a database table so that performance can be monitored over time (if such monitoring is not available in other tools).
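Step-level timing logged to a table can be sketched as follows; sqlite is used here only to keep the example self-contained, and the table and step names are hypothetical.

```python
# Sketch: record start/stop times of key steps in a database table so run
# durations can be compared over time. The step_log schema is hypothetical.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE step_log (step TEXT, started REAL, ended REAL)")

def logged_step(name, fn):
    started = time.time()
    result = fn()  # run the actual step
    conn.execute("INSERT INTO step_log VALUES (?, ?, ?)",
                 (name, started, time.time()))
    conn.commit()
    return result

logged_step("extract", lambda: sum(range(1000)))
logged_step("load", lambda: None)
rows = conn.execute("SELECT step FROM step_log ORDER BY rowid").fetchall()
```

Querying `step_log` across runs shows which steps are drifting slower over time, which is often the first sign of a looming SLA problem.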
Combining these recommendations with good programming techniques and a solid infrastructure should reduce batch processing failures. This will reduce support costs while allowing IT staff to focus on new development to meet the dynamic needs of the organization.