**Introduction**

The first thing that comes to anyone’s mind when they read the words “Inverse Kinematics” is probably nothing; at least, that was once the case for me. In January 2020, at the start of the year of doom, I found myself in the new environment of a project team, SEDS VIT Projects, where I was recruited as part of the CS team. I couldn’t help but wonder in what possible ways this domain could be useful in making a rover work, and that excitement was overwhelming. Soon enough, I was assigned my task, which revolved around a very critical part of the rover: the robotic arm. That was the first time I had heard the two words this article is about.

**What is Inverse Kinematics?**

Before you learn what inverse kinematics is, you must first know what forward kinematics is.

Imagine you’re chilling on your couch and suddenly feel thirsty, so you want to pick up the bottle of water sitting on the table next to the couch. You simply move your hand so that your fingers grab the bottle and pick it up; the rest of your arm, including your elbow and shoulder, moves accordingly. This natural movement comes effortlessly to all of us, but implementing it in a robotic arm is much more difficult.

Now, imagine a world in which we move our arms based on the principles of forward kinematics. Picture the same scenario again, but this time you can’t move your entire arm at once. Instead, you have to proceed “forward”, quite literally, starting from your shoulder. You may move or rotate your shoulder while keeping the rest of the arm fixed, until the arm is in line with the bottle of water. Going “forward”, you then move the elbow while keeping the rest of the arm fixed, again aligning with the bottle. Finally, you bend or rotate your wrist and grab the bottle. Comparing the two techniques after experimentation, forward kinematics took 3 seconds while inverse kinematics took a single second.

For a better understanding, you may see the 2 GIFs below.

So far, we have established one thing: inverse kinematics is efficient.

Now let’s drop the layman’s view for a while and talk robotics. Imagine a three-segment robotic arm: segment 1 runs from the shoulder to the elbow, segment 2 from the elbow to the wrist, and segment 3 from the wrist to the end effector, or gripper. Forward kinematics means predicting the coordinates of the gripper from the inputs: the joint angles, the length of each segment, and the coordinate of each joint. What would be ball-and-socket and synovial joints in a human arm are motors here. Inverse kinematics is, as the name suggests, the inverse of this: we take the coordinates of the gripper and calculate the joint angles required to reach them, just as we discussed with the human arm in the example above. So, how do we do it? How do we teach our robotic arm the mechanics that we as infants took quite a while to pick up? The answer lies in geometry, in the form of trigonometry. Given below is the mathematical interpretation of what we’ve discussed, with some pictorial justification.
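To make the forward half concrete, here is a minimal sketch of planar forward kinematics in Python. The function name, the equal 10-unit segment lengths, and the convention that each joint angle is measured relative to the previous segment are my own illustrative choices, not our rover’s actual code:

```python
import math

def forward_kinematics(lengths, angles):
    """Planar forward kinematics: given segment lengths and joint
    angles (each measured relative to the previous segment), return
    the (x, y) coordinate of the gripper."""
    x, y, theta = 0.0, 0.0, 0.0
    for length, angle in zip(lengths, angles):
        theta += angle                  # accumulate joint rotations
        x += length * math.cos(theta)
        y += length * math.sin(theta)
    return x, y

# Three equal segments, all joints straight: the arm points along x.
print(forward_kinematics([10, 10, 10], [0, 0, 0]))  # → (30.0, 0.0)
```

Notice the direction of the computation: we walk “forward” from the shoulder, joint by joint, exactly like the couch example above.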

For a better understanding of what a robotic arm looks like, use the image below for reference:

**Mathematical Justification**

In figure 1, consider (x, y) as the coordinates of the object you want to grab. We start measuring our workspace from the edge of the base of the robotic arm. Δy represents the distance from the base of the arm to that edge. The angle θ’ is the angle through which the arm must first rotate so that it is in line with the object; it performs the rotatory function of the ball-and-socket joint.

In our system, the lengths of the segments of the arm are equal. The two segments and the straight line from the shoulder to the object therefore form an isosceles triangle, and the isosceles triangle rule tells us that the two base angles are equal as well. From these two premises, we arrive at the mathematical conclusion that lets us compute the angles the segments should make with each other.
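The isosceles geometry above can be sketched in Python for a two-segment arm. The function and variable names are mine, not our rover’s code: the distance d to the target is the base of the isosceles triangle, so each base angle is α = arccos(d / 2L), and the elbow bends through 2α to land the gripper on the target.

```python
import math

def two_link_ik(x, y, L):
    """Inverse kinematics for a planar arm of two equal segments of
    length L. The segments and the line to the target (x, y) form an
    isosceles triangle, so both base angles equal acos(d / 2L)."""
    d = math.hypot(x, y)                # distance to the target
    if d > 2 * L:
        raise ValueError("target out of reach")
    alpha = math.acos(d / (2 * L))      # equal base angles (isosceles rule)
    phi = math.atan2(y, x)              # direction of the target line
    shoulder = phi + alpha              # lift the first segment above it
    elbow = -2 * alpha                  # bend back down onto the target
    return shoulder, elbow
```

Feeding the resulting angles back through forward kinematics returns the original (x, y), which is a handy sanity check when implementing this yourself.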

Now you might be wondering how we determine the coordinates of the object in the first place. The answer is OpenCV. You can use a camera to capture an image, feed that image to your code, and get back the object’s position in a 2-D coordinate system. A feasible alternative is a Kinect, which also gives you depth, so you can build a 3-D coordinate system and upscale your project further.
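A pixel coordinate from OpenCV still has to be converted into the arm’s workspace frame before the math above applies. Here is the simplest possible sketch of that mapping; the function name, and the assumptions of a distortion-free, top-down camera and a known workspace size, are mine:

```python
def pixel_to_workspace(px, py, img_w, img_h, work_w, work_h):
    """Map a pixel coordinate from an image (origin at the top-left,
    y growing downward) to a workspace coordinate (origin at the
    bottom-left, y growing upward). Assumes the camera looks straight
    down at the workspace with no lens distortion."""
    x = px / img_w * work_w
    y = (img_h - py) / img_h * work_h   # flip the image's y axis
    return x, y

# Centre of a 640x480 image maps to the centre of a 60x40 workspace.
print(pixel_to_workspace(320, 240, 640, 480, 60, 40))  # → (30.0, 20.0)
```

A real setup would first undistort the image and calibrate this mapping against known reference points, but the linear version conveys the idea.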

**Where we use it**

We use it in the robotic arm of our rover. The base here becomes the base of the rover, and a turntable performs the rotatory function. Using inverse kinematics, we perform dexterous tasks such as picking up an object, rotating a knob, turning a switch on or off, and typing on a keyboard. The interfacing between fetching values from IoT sensors, computing, and passing instructions to rotate the motors is made possible through ROS, which is where CS comes into play.

Inverse kinematics is applicable in many fields of robotics. You can build plenty of DIY projects around your own ideas, ranging from something as simple as a robotic-arm typewriter to something much more complex, like a robotic arm that feeds you. Different use cases will require different levels of the implementation we discussed.