In the past few years, the adoption of REST APIs has increased rapidly. REST has become the leading standard for building web APIs, and most web and mobile applications today are backed by them. Yet even experienced developers run into usability challenges, especially around data: handling it is not as simple as it sounds, and retrieving it efficiently from REST APIs remains a common struggle.

Types Of Responses

A challenge many Java developers face is that they do not know who will consume their API, or (tougher still) how. The API could be called over a LAN by an internal user, who may be happy receiving huge responses, or making several API calls to gather all the different chunks of data needed.

It could equally be external users connecting over the internet, who care about the size of the returned data (not least because of bandwidth charges).

In practice, both cases occur. What makes it worse is that, as a software developer, you don’t get to decide whether your API should be:

  • 'chatty' (sending many small chunks of data) or
  • 'chunky' (sending massive chunks of data).

Consumers usually demand both, depending on the situation.
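What the two styles look like from the client side can be sketched in a few lines. This is a minimal, hypothetical illustration; the endpoint paths and the `expand` parameter are invented for the example, not taken from any real API.

```java
// Hypothetical sketch: the same profile screen served by a "chatty"
// API (several small calls) versus a "chunky" one (one large call).
import java.util.List;

public class ChattyVsChunky {
    // Chatty style: the client stitches the screen together itself
    // from several small, focused endpoints.
    static List<String> chattyCalls(long userId) {
        return List.of(
            "/users/" + userId,             // basic profile
            "/users/" + userId + "/posts",  // recent posts
            "/users/" + userId + "/friends" // friend list
        );
    }

    // Chunky style: one endpoint returns everything at once,
    // whether the client needs it all or not.
    static List<String> chunkyCalls(long userId) {
        return List.of("/users/" + userId + "?expand=posts,friends");
    }

    public static void main(String[] args) {
        System.out.println("chatty: " + chattyCalls(42).size() + " requests");
        System.out.println("chunky: " + chunkyCalls(42).size() + " request");
    }
}
```

Neither shape is wrong; the problem is that a fixed REST design forces you to pick one ahead of time.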

Data Fetching

Modern web apps and native apps are increasingly data-driven, which often requires them to fetch and combine data from multiple sources, typically within a very short time. A common bottleneck in RESTful APIs is the need for multiple roundtrips to multiple endpoints to gather all the data a view needs. Fetching data in REST APIs usually means hitting an ever-growing set of endpoints, and that set grows very quickly as you scale the application.
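A back-of-the-envelope sketch shows why sequential roundtrips hurt. The 100 ms roundtrip time is an assumed figure for illustration only; the point is that dependent calls cannot overlap, so latency adds up linearly.

```java
// Hypothetical sketch: the latency cost of assembling one screen from
// several sequential, dependent REST roundtrips.
public class RoundtripCost {
    static final long RTT_MS = 100; // assumed network roundtrip time

    // Each dependent call must wait for the previous response,
    // so total latency grows linearly with the number of calls.
    static long sequentialFetch(int endpoints) {
        return endpoints * RTT_MS;
    }

    public static void main(String[] args) {
        // e.g. /posts/1, then /users/{authorId}, then /posts/1/comments
        System.out.println("3 REST calls:     " + sequentialFetch(3) + " ms");
        System.out.println("1 combined call:  " + sequentialFetch(1) + " ms");
    }
}
```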

Over-fetching Or Under-fetching

Often enough, additional data is fetched along with the required data. For instance, to get just the name of a person from the database, the API may send back every other field of the record as well. This is over-fetching.

So if there are one hundred other fields storing data for this person, the server will return all hundred fields each time this endpoint is hit.

The client never asked for these fields, so the redundant data is both useless and time-consuming to transfer.

Under-fetching is the same problem the other way around: a single call does not return all the required data, and further calls are needed to fetch the rest.

Network Request

As mentioned above, a fixed REST endpoint will typically return more or less data than is actually needed.

Imagine the waste this generates when REST APIs scale. Java developers tolerate it mainly because each endpoint has a fixed data structure; within plain REST, the waste cannot easily be avoided.

The user ends up making additional (read: unnecessary) requests to transfer resources that could have been delivered in fewer requests.

This clearly wastes bandwidth: the more requests, the more waste. For larger datasets the result is noticeably slower responses and higher latency.

Static APIs

REST APIs are static: data is exposed and retrieved in a fixed shape, so making changes to the API later is challenging and takes time and effort. To retrieve new data from the API, you need a new endpoint, and you cannot always simply add one.

Assume you want new data from an old API (e.g. additional fields that weren't included before) for a resource with fixed endpoints. You may have to release a whole new version of the API for that.

Every solution Java developers are offered for these problems boils down to two things:

  • Designing the API as closely as possible to the requirement(s) of clients.
  • Versioning or updating the API when the data changes.

Modern APIs need changes, and lots of them. You could, of course, design your API to expose data more efficiently, but this is not a final solution, since requirements are often inaccurate or shifting. And versioning, as experienced developers know, is a complicated task in itself.

Solving Data Retrieval Challenges With GraphQL

GraphQL solves (or at least minimizes) these problems.

What is GraphQL?

GraphQL is an open source project from Facebook. It started as an internal idea a few years ago and has since grown into a well-defined specification.

It is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API. It gives clients the liberty to ask for exactly what they need.

It also makes it easier to evolve or update APIs over time, and it enables powerful developer tools.

If a client only wants the first name, it can specify that single attribute in the query, and the server will return only that attribute.
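The query below is standard GraphQL syntax held in a Java text block; the `person` field and `firstName` attribute are hypothetical schema names. The toy `resolve` method mimics what a GraphQL server does: return only the fields that were asked for.

```java
// Sketch of the idea behind a GraphQL query for a single attribute,
// plus a toy resolver that returns only the requested fields.
import java.util.Map;

public class ExactFetch {
    // Real GraphQL syntax; the schema names are assumed for the example.
    static final String QUERY = """
        {
          person(id: 1) {
            firstName
          }
        }
        """;

    // Toy stand-in for a GraphQL server: given the full record and the
    // requested field names, return only the requested subset.
    static Map<String, Object> resolve(Map<String, Object> record,
                                       java.util.Set<String> requested) {
        Map<String, Object> result = new java.util.LinkedHashMap<>();
        for (String field : requested) {
            if (record.containsKey(field)) {
                result.put(field, record.get(field));
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> record = Map.of(
            "firstName", "Ada", "lastName", "Lovelace", "phone", "...");
        Map<String, Object> response =
            resolve(record, java.util.Set.of("firstName"));
        System.out.println(response); // only firstName comes back
    }
}
```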

GraphQL also supports the Hash API method, and its performance there is comparatively fast. With our own implementation of it, we keep total control: any alteration, without the need for a whole new version.

GraphQL is all about giving control to the consumers of your API.

Along with precise queries, GraphQL supports many other actions, such as:

  • Specifying additional filters on attributes (e.g. first 10, last 10).
  • Validating and parameterizing queries using variables.
  • Changing queries dynamically using directives.
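All three capabilities above fit in one query. The syntax below is standard GraphQL (variables declared in the operation, the built-in `@include` directive, and a Relay-style `first` pagination argument); the `person` and `posts` fields are assumed schema names for illustration.

```java
// Hedged sketch: a parameterized GraphQL query using variables,
// the @include directive, and a "first 10" filter argument.
public class ParameterizedQuery {
    static final String QUERY = """
        query Person($id: ID!, $withPhone: Boolean!) {
          person(id: $id) {
            firstName
            phone @include(if: $withPhone)
            posts(first: 10) {
              title
            }
          }
        }
        """;

    // Variables travel alongside the query as JSON, so the same
    // validated query can be reused with different values.
    static final String VARIABLES = """
        { "id": "1", "withPhone": false }
        """;

    public static void main(String[] args) {
        System.out.println(QUERY);
        System.out.println(VARIABLES);
    }
}
```

Because `$withPhone` is false here, the server would skip the `phone` field entirely; changing the variable changes the response shape without touching the query text.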

So what developers need is an API that:

  • Fully supports GraphQL.
  • Gives its consumers total control over what data they fetch, and when.

Conclusion

GraphQL is one of many solutions. Top software developers keep working on REST APIs to make them more convenient, especially for Java coders, and there are many other frameworks, skills, and techniques for overcoming data retrieval challenges.

Author

Shaharyar Lalani is a developer with a strong interest in business analysis, project management, and UX design. He writes and teaches extensively on themes current in the world of web and app development, especially in Java technology.
