
ATS @ naukri – Solution and Architecture

This blog is a continuation of part 1 of the series. In part 1, we discussed what an ATS is and the challenges involved in sharing responses across ATSs. Here we present the solution we have built to tackle this problem.


We have built a generic, robust, scalable and efficient system that can support a variety of connection methods following different protocols, each needing different inputs, in different formats, at different time intervals. The system is still in its early stages: if a client comes with a requirement we have never encountered or thought of, we sometimes need to refactor what we have already built, so the system is ever improving. Let us discuss how we did this. To make sure we don't need to write new code every time a similar requirement pops up, we devised a solution composed of the following components:

  1. Connection Agent: Since there can be any number of connection methods, we maintain a registry of clients and the connection method each one needs, e.g. FTP, SFTP, email or API. This agent knows it all: how the connection is to be made, what needs to be passed in the headers, the authentication mechanism, the connection method, the input format, the IP, the port, anything you can think of that relates to creating a connection with the client.
  2. Template Builder: Since each client needs different inputs, a different number of input parameters and different labels, we have built a template builder that builds a different template for each requirement. The templates are built in a format understood by our parsers, so that we can fill them with data.
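To make the Connection Agent concrete, here is a minimal sketch of the kind of registry it might consult. The client names, field names and values are purely illustrative, not our actual schema:

```python
# Hypothetical sketch of the Connection Agent's client registry.
# Client IDs, hosts and field names are illustrative, not the real schema.
CLIENT_CONNECTIONS = {
    "client-a": {
        "method": "sftp",
        "host": "feeds.client-a.example",
        "port": 22,
        "auth": {"type": "key", "key_path": "/keys/client-a"},
        "input_format": "xml",
    },
    "client-b": {
        "method": "api",
        "url": "https://api.client-b.example/applications",
        "headers": {"Authorization": "Bearer <token>"},
        "input_format": "json",
    },
}

def connection_config(client_id):
    """Look up everything needed to open a connection to a client."""
    return CLIENT_CONNECTIONS[client_id]
```

With a registry like this, adding a new client is a data change, not a code change.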
How do we build generic templates? We create postfix expressions for each input value. These expressions are then evaluated by our parsers, and each expression is replaced by its corresponding value.
This is a sample input. The template is in JSON format, but it could equally be XML or CSV. As you can see, the values look very weird. What are these values? They are our postfix expressions.
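A template of this kind might look like the following. The field names and the *getEmail function are hypothetical, chosen only to illustrate how postfix expressions sit inside the JSON:

```json
{
  "firstName": "#splitWord|||*getName; ;0",
  "fullName": "*getName",
  "email": "*getEmail"
}
```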
Types of expressions: We have 2 types of expressions:
  1. Functions: e.g. *getName
  2. Operations: e.g. #splitWord|||*getName; ;0
Each function maps to a function in our model. Operations transform the values given to them.
This is how the splitWord operation works:
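As a sketch (the real implementation may differ), splitWord takes the value to split, a delimiter and an index. For `#splitWord|||*getName; ;0`, assuming *getName resolves to a full name, it returns the first word:

```python
def split_word(value, delimiter, index):
    """Hypothetical sketch of the splitWord operation: split `value`
    on `delimiter` and return the piece at position `index`."""
    return value.split(delimiter)[int(index)]

# #splitWord|||*getName; ;0 — assuming *getName resolves to this name:
first_name = split_word("Sachin Ramesh Tendulkar", " ", "0")
print(first_name)  # → Sachin
```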
But wait, why do we need operations? Couldn't there be a simpler way?
Let us go back to the name example: some clients need Sachin in the first name, others may need Sachin Ramesh, and there are many such cases across many other fields as well. So we built a system that is generic and scalable: you just need to play with the operations on the data, with no need to touch the code. We only touch the code when some new, unseen requirement pops up.
The nesting of operations can go to any level.
In a nested postfix expression, the innermost expression is evaluated first. In the end, the trim function receives two arguments: the character to be trimmed and the string to be operated on.
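The inside-out evaluation can be sketched like this. The expression, the resolved name and the operation signatures are assumptions for illustration only:

```python
def split_word(value, delimiter, index):
    # split `value` on `delimiter` and return the piece at `index`
    return value.split(delimiter)[int(index)]

def trim(char, value):
    # strip the given character from both ends of the string
    return value.strip(char)

def get_name():
    return "Sachin Ramesh Tendulkar."   # assumed resolved value of *getName

# A nested expression such as #trim|||.;#splitWord|||*getName; ;2
# is evaluated innermost-first:
inner = split_word(get_name(), " ", "2")  # the last word, "Tendulkar."
outer = trim(".", inner)                  # trim receives the char and the string
print(outer)  # → Tendulkar
```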
  3. Input ETL and Mapping Engine: This is the most important and most complex component of the system. It populates the templates generated by the template builder with data; postfix expressions are evaluated here. If a company needs some specific mapping, that is also done here.
  4. Application Queuer: Once a template has been rendered, we queue it up so that it can be picked up in the next available slot.
  5. Push Jet: This is the push service responsible for actually sending the data. It checks when data needs to be sent to a client, and uses the response service to work out what the response returned by the client means. On success, the item is dequeued; otherwise it is enqueued again, up to 5 times, so that there is no loss of data.
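Push Jet's dequeue-or-retry behaviour can be sketched as follows. The function names and the stand-in for the response service are assumptions; in the real system a failed item goes back on the queue for a later slot rather than being retried in a tight loop:

```python
MAX_ATTEMPTS = 5

def is_success(response):
    # assumed stand-in for the response service, which interprets
    # the client's reply; here a bare boolean stands for the verdict
    return response is True

def push_with_retry(send, application):
    """Hypothetical sketch of Push Jet's retry policy: an application
    is retried up to MAX_ATTEMPTS times before we give up."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        response = send(application)
        if is_success(response):
            return True   # success: the application is dequeued
        # failure: the real system enqueues it again for a later slot
    return False          # exhausted all attempts
```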

Architecture of this whole flow:

For those looking at this diagram and wondering what IMS is: IMS is the Intercept Management System, which lets us show layers for profile completion, questionnaires or anything else to jobseekers. If a jobseeker's profile is not complete, or a recruiter has asked for additional information or specific questions to be answered at the time of applying to a job, layers are shown to capture this data. These layers are served by IMS.
What’s next?
Till now, we only have support for pushing data. Soon, we will be releasing a pull API, so that clients can pull applications directly from naukri as and when needed, at their convenience.
*Note* Please contact us for any integration-related queries.
Posted in Web Technology
