Automation and Cognitive Technology at USPTO

by sol-admin

The United States Patent and Trademark Office (USPTO) is the federal agency responsible for granting U.S. patents and registering trademarks. The USPTO uses automation and AI to improve operational efficiency and empower its highly skilled Examining Corps. Additionally, the agency is automating various processes to lighten the manual load on its Examiners.

Here is a brief interview with Timothy Goodwin, Deputy Director, Office of Organizational Policy and Governance at the USPTO, in which he shares how the agency is leveraging automation and cognitive technology at America’s Innovation Agency.

How are you leveraging automation at USPTO?

The depth and breadth of the automation technologies being leveraged within USPTO are vast. Automation is a critical enabler for driving business value. Recently we have used AI/ML to reduce the manual patent classification actions performed by examiners; RPA to free up valuable time by automating suspension checks on trademark applications; and virtual Data-as-a-Service (vDaaS) to increase the quality of applications in development through on-demand provisioning of test data. All of this has helped propel more and more automation capabilities and is enabling our agency to deliver higher-quality services to the public.

How do you identify which problem area(s) to start with for your automation and cognitive technology projects?

I am going to narrow this question and focus on RPA. When we first started our RPA program in 2019, we were looking for any USPTO process that could be used to demonstrate capabilities. This started with a “first-in-first-out” model, where the requests submitted often helped only an individual or a small number of users. Since then, we have evolved our intake process to look more broadly at automation requests and find critical problem areas impacting USPTO business lines. A recent example was developing RPA solutions to help reduce the backlog created by the high volume of trademark applications submitted over the past twelve months.

How do you measure ROI for these sorts of automation, advanced AI, and analytics projects?

Measurements are always based upon the business value derived from the automation’s demonstrated capabilities. This can come in many different forms depending on the solution being implemented. For provisioning of cloud infrastructure, it can be something as simple as creating a routine that terminates virtual services when not in use, avoiding unnecessary expenses. For RPA, it can be looking at the number of productivity hours recouped from one or more automated process instances. The key metric is always centered on asking ourselves, “How does this help disseminate and issue timely and high-quality patents and trademarks?”
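The two ROI measures described here reduce to simple arithmetic. The sketch below illustrates the calculation; all figures and rates are hypothetical examples, not USPTO data:

```python
def idle_cost_avoided(idle_hours: float, hourly_rate: float) -> float:
    """Cloud spend avoided by shutting down virtual services when not in use."""
    return idle_hours * hourly_rate

def rpa_hours_recouped(runs: int, manual_minutes_per_run: float) -> float:
    """Productivity hours recouped by automating a manual process."""
    return runs * manual_minutes_per_run / 60

# Hypothetical example: 500 idle VM-hours at $0.20/hour, and an RPA bot
# that ran 1,200 times, each run replacing 15 minutes of manual work.
print(idle_cost_avoided(500, 0.20))   # 100.0 dollars avoided
print(rpa_hours_recouped(1200, 15))   # 300.0 hours recouped
```

Either number can then be weighed against the cost of building and maintaining the automation to decide whether it delivers net business value.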

What are some of the unique opportunities the public sector has when it comes to data and AI?

In very general terms, the public sector is a steward of, and has access to, vast amounts of unique data that is inaccessible to any other entity in the world. This, of course, is from the totality perspective, not from the views available through open data platforms. There is immense potential in combining these unique data sets with AI to advance research in every discipline known today. Quite simply, it is boundless. The challenges, on the other hand, span legal, technical, and ethical boundaries. However, I’d like to point back to our responsibilities as data stewards and to ensuring that public trust is upheld. For me, this is the fundamental topic that should be addressed when determining how data should be used. Ultimately, the question of how data may be used, and for what AI-related purposes, has to be explicitly defined and vetted before any pursuit is made, to ensure we are exceeding the public’s expectations.

How do analytics, automation, and AI work together at the USPTO?

USPTO data is unique, and with that come unique challenges and opportunities. The three areas are naturally woven together and build upon each other to enable advanced capabilities. Automations help feed our patent and trademark data lakes, where preparations are made to address data quality and security. This, in turn, feeds our AI/ML models, which are eventually rolled out to provide data insights and visualizations to broader groups. All of this helps create a sustainable environment for making data-driven decisions for the agency and ensuring USPTO can continually provide high-quality services.

What are you doing to develop an AI-ready workforce?

Workforce development within advanced technologies is already a challenge for many federal agencies. At USPTO we are fortunate to have strong leadership within the data science, analytics, and AI space from Scott Beliveau and our new emerging technologies director, Jerry Ma. With support from their teams, they are forging a new path for other USPTO personnel to follow by creating opportunities and allowing innovation to be explored. Enabling focused experimentation within AI that provides strong business value is one of the best tools we can leverage for developing our workforce. In the more practical sense, we have also been growing our workforce through traditional training and have had many employees participate in various levels of AI/ML and advanced analytics courses.

What AI technologies are you most looking forward to in the coming years?

I am really trying to keep an eye on how AI is evolving in the domain of cybersecurity research and development. There has already been a vast amount of work and success in this area, to the point that virtually every modern antivirus product uses AI for static analysis and is trending better with dynamic analysis. What I am most interested in is seeing how AI can “heal” vulnerable or compromised systems in real time. Knowing how vulnerability research is traditionally conducted, there are ample opportunities to use AI to keep a bug from being viably exploited. Recognizing and disseminating AI-driven patching actions before a compromise occurs is what I hope matures in the coming years.
