Multiple aspects need to be taken into account when building business-critical RPA processes. Getting the robot to run is only part of the bigger picture.
“Traditional RPA robot implementations often encounter challenges when processing multiple input data items. If a robot has to process 100 data items and 7 items fail, how does the robot handle and report those failures? Simply rerunning the robot could result in duplication of data processing so failures have to be painstakingly dealt with manually.”
Antti Karjalainen, Co-Founder and CEO, Robocorp
To address issues like these, Robocorp, an open-source RPA platform, recently announced the release of a game-changing feature called “Work Data Management” — empowering business users to operate and manage their automated processes while enabling developers to build robust and scalable solutions. The feature is designed to support both simple and complex use cases with heavy workloads across a range of verticals, including Fintech, E-commerce, Healthcare, and Insurance.
Work Data Management provides the ability to build robust, cost-effective, high-performance business-critical automation that can efficiently process large workloads using its built-in parallel processing functionality. Splitting a large workload into two steps, one robot that produces work items and another that consumes them (the “producer-consumer” model), and creating queues for the robots to work through in parallel increases automation efficiency and productivity.
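The producer-consumer split can be sketched in plain Python. The queue below is a stand-in for the Control Room's work-item queue, and all names are illustrative, not Robocorp's actual API:

```python
import queue
import threading

def producer(raw_payloads, work_queue):
    """Step 1: split the raw workload into individual work items."""
    for payload in raw_payloads:
        work_queue.put(payload)   # one work item per payload
    work_queue.put(None)          # sentinel: no more items coming

def consumer(work_queue, results):
    """Step 2: process work items one by one until the queue is drained."""
    while True:
        item = work_queue.get()
        if item is None:
            break
        results.append(item.upper())  # placeholder for real processing

work_queue = queue.Queue()
results = []
worker = threading.Thread(target=consumer, args=(work_queue, results))
worker.start()
producer(["invoice-1", "invoice-2", "invoice-3"], work_queue)
worker.join()
# results == ["INVOICE-1", "INVOICE-2", "INVOICE-3"]
```

In the real platform the two steps run as separate robot runs, so the consumer side can be scaled out to many robots draining the same queue in parallel.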
The core concept of Work Data Management is the work item: the entity the Control Room uses to store any data meant to be processed by robots. A work item is an individual piece of data that your process handles, such as an invoice, a URL, or a customer support ticket. Each work item can contain input metadata for the robot processing it, as well as output data and output files.
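A minimal sketch of what a work item carries, in plain Python; the field names here are illustrative assumptions, not the Control Room's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class WorkItem:
    """Illustrative work item: an input payload plus output data and files."""
    payload: dict                               # input metadata, e.g. an invoice
    output: dict = field(default_factory=dict)  # results written by the robot
    files: list = field(default_factory=list)   # names of attached output files

# An invoice to be processed by a robot:
item = WorkItem(payload={"invoice_id": "INV-1001", "amount": 250.0})

# After processing, the robot records its results on the same item:
item.output["status"] = "processed"
item.files.append("receipt-INV-1001.pdf")
```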
Compared with traditional scheduled robot runs, Work Data Management is event-driven: the Control Room “actively pushes” work items to robots and launches and scales runtime environments automatically according to the workload and the available resources. This provides efficient parallel processing, resulting in high throughput with fast response times.
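Because each work item is tracked individually, a failed item can be flagged and retried on its own instead of forcing an all-or-nothing rerun of the whole batch, the exact problem described in the quote above. A sketch in plain Python, with illustrative names rather than Robocorp's API:

```python
def process(item):
    """Placeholder for real per-item processing; fails on malformed items."""
    if "amount" not in item:
        raise ValueError("missing amount")
    return item["amount"] * 2

def run_batch(items):
    """Process each item independently; collect failures instead of aborting."""
    done, failed = {}, []
    for i, item in enumerate(items):
        try:
            done[i] = process(item)
        except Exception:
            failed.append(i)  # only these indices need a rerun
    return done, failed

items = [{"amount": 10}, {}, {"amount": 5}]
done, failed = run_batch(items)
# done == {0: 20, 2: 10}, failed == [1]
```

Rerunning the process with only the items in `failed` reprocesses nothing that already succeeded, avoiding the data duplication a blanket rerun would cause.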