Inverse kinematics
Inverse kinematics is the function that gives us the joint angles needed to achieve a particular end-effector pose. If the forward kinematics maps joint angles to a pose, $\xi = K(q)$, then the inverse kinematics is described by:
$$ q = K^{-1}(\xi) $$
To explain how we can do this, we will look at the inverse kinematics of a simple 2-joint robot arm for starters.

The objective is to calculate the joint angles $q_1$ and $q_2$ needed to achieve the end-effector pose $(x, y)$.
It should be noted that either $q_1$ or $q_2$ can be defined in terms of the other.
We start by defining $r$, the distance from the base of the arm to the end-effector, simply using Pythagoras, i.e.
$$ r^2 = x^2 + y^2 $$
We then need to use the cosine rule to determine the angle $\alpha$ between the two links. The cosine rule states:
$$ a^2 = b^2 + c^2 - 2bc\cos A $$
where $b$ and $c$ are represented by the link lengths $a_1$ and $a_2$ respectively, and length $a$ is the $r$ calculated above, making $A$ the angle $\alpha$. From this we can compute an expression for the angle $\alpha$:
$$ \cos\alpha = \frac{a_1^2 + a_2^2 - r^2}{2a_1 a_2} $$
From Figure 1 we can see that $\alpha$ is also related to the joint angle $q_2$ by $\alpha = \pi - q_2$. Therefore, considering that $\cos(\pi - q_2) = -\cos q_2$ and that $r^2 = x^2 + y^2$, we can compute the joint angle to be:
$$ q_2 = \pm\cos^{-1}\left(\frac{x^2 + y^2 - a_1^2 - a_2^2}{2a_1 a_2}\right) $$
Taking a second triangle about the second link and joint angle $q_2$, we can compute the sides of this triangle in terms of the joint angle $q_2$ and link length $a_2$:
$$ a_2\sin q_2 \quad\text{and}\quad a_2\cos q_2 $$
We can then use these lengths to find the angle $\beta$ between the first link and the line $r$ to the end-effector, with the relationship:
$$ \tan\beta = \frac{a_2\sin q_2}{a_1 + a_2\cos q_2} $$
Given that the sum of $\beta$ and the joint angle $q_1$ relates to the triangle of base $x$ and height $y$, shown below, we can see that the angle $\beta$ relates to the joint angle $q_1$ by:
$$ q_1 + \beta = \tan^{-1}\left(\frac{y}{x}\right) $$
We can then substitute the expression for $\beta$ to compute the equation of $q_1$ in terms of the link lengths and the second joint angle $q_2$:
$$ q_1 = \tan^{-1}\left(\frac{y}{x}\right) - \tan^{-1}\left(\frac{a_2\sin q_2}{a_1 + a_2\cos q_2}\right) $$
Looking at our expression for $q_2$, we can see that our inverse kinematics gives us two possible solutions for the joint angle, positive or negative. Either one will give us the desired end-effector pose, as the expression for the joint angle $q_1$ is itself a function of $q_2$. However, they correspond to different joint angle configurations.
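As a quick sanity check, the sketch below implements the two closed-form solutions derived above for a 2-link arm and verifies each of them against the forward kinematics; the link lengths and target position are arbitrary values chosen for illustration.

% Two-link planar arm: the two closed-form inverse kinematics solutions.
% Link lengths and target position are arbitrary values for illustration.
a1 = 1.0;  a2 = 0.8;      % link lengths
x  = 1.2;  y  = 0.5;      % desired end-effector position

c2 = (x^2 + y^2 - a1^2 - a2^2) / (2*a1*a2);
for q2 = [acos(c2), -acos(c2)]     % the positive and negative solutions
    q1 = atan2(y, x) - atan2(a2*sin(q2), a1 + a2*cos(q2));
    % forward kinematics check: both configurations should reach (x, y)
    xe = a1*cos(q1) + a2*cos(q1 + q2);
    ye = a1*sin(q1) + a2*sin(q1 + q2);
    fprintf('q1 = %.4f, q2 = %.4f  ->  (%.4f, %.4f)\n', q1, q2, xe, ye);
end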
The method covered above is a geometry-based approach. However, we can also solve the inverse kinematics by taking a more analytic approach, using algebra.
For the same 2-joint robot arm we discussed in the previous example, shown in Figure 1, we have an end-effector pose that can be described by the forward kinematics expression:
$$ T = R_z(q_1)\,T_x(a_1)\,R_z(q_2)\,T_x(a_2) $$
This is a sequence of two translations and two rotations. Multiplying out and expanding this expression gives a homogeneous transformation matrix of the form:
$$ T = \begin{bmatrix} \cos(q_1+q_2) & -\sin(q_1+q_2) & a_1\cos q_1 + a_2\cos(q_1+q_2) \\ \sin(q_1+q_2) & \cos(q_1+q_2) & a_1\sin q_1 + a_2\sin(q_1+q_2) \\ 0 & 0 & 1 \end{bmatrix} $$
In this case, what we are most interested in is the position of the end-effector, defined by the x and y coordinates, which are described by the 3rd column of the matrix above:
$$ x = a_1\cos q_1 + a_2\cos(q_1+q_2), \qquad y = a_1\sin q_1 + a_2\sin(q_1+q_2) $$
We can add the squares of the expressions defining the x and y coordinates of the robot end-effector, which resolves to:
$$ x^2 + y^2 = a_1^2 + a_2^2 + 2a_1 a_2\cos q_2 $$
We can then rearrange this equation to isolate the $\cos q_2$ term and give us an expression for the required joint angle in terms of the end-effector coordinates:
$$ q_2 = \pm\cos^{-1}\left(\frac{x^2 + y^2 - a_1^2 - a_2^2}{2a_1 a_2}\right) $$
This is the same solution we computed using the geometry-based approach.
Now to find the solution for $q_1$, we can simplify the x and y coordinate expressions by substituting the short-hand notations:
$$ c_1 = \cos q_1, \quad s_1 = \sin q_1, \quad c_2 = \cos q_2, \quad s_2 = \sin q_2 $$
This gives us:
$$ x = (a_1 + a_2 c_2)c_1 - (a_2 s_2)s_1, \qquad y = (a_1 + a_2 c_2)s_1 + (a_2 s_2)c_1 $$
These expressions are of the form:
$$ a\sin\theta + b\cos\theta = c $$
which has the solution:
$$ \theta = \tan^{-1}\left(\frac{c}{\pm\sqrt{a^2 + b^2 - c^2}}\right) - \tan^{-1}\left(\frac{b}{a}\right) $$
Therefore, by substituting the variables of the expression for y into the identity above, such that $a = a_1 + a_2 c_2$, $b = a_2 s_2$ and $c = y$ (noting that $a^2 + b^2 = x^2 + y^2$, so $\pm\sqrt{a^2 + b^2 - c^2} = x$), we can solve for $q_1$:
$$ q_1 = \tan^{-1}\left(\frac{y}{x}\right) - \tan^{-1}\left(\frac{a_2\sin q_2}{a_1 + a_2\cos q_2}\right) $$
We can see this mirrors the solution obtained from the geometry-based approach.
The inverse tan function:
$$ \theta = \tan^{-1}\left(\frac{y}{x}\right) $$
sometimes also denoted atan or arctan, suffers from a limitation. The standard inverse tan function is incapable of recognizing an angle that passes into the left-hand side (LHS) of the plane, i.e. where x is negative. This is because it only has the range $(-\pi/2, \pi/2)$ and, since it operates on the single ratio y/x, it cannot account for a rotation into the LHS or for the individual signs of the coordinates. The four-quadrant function atan2 is used to get around this issue; it has the range $(-\pi, \pi]$ and accounts for the signs of the coordinates it is given.
The function atan2 is found in many mathematical libraries, including MATLAB, where it considers the signs of both coordinates to evaluate the angle: $\theta = \mathrm{atan2}(y, x)$.
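As a quick illustration of the difference, the point (-1, -1) lies in the third quadrant at -135 degrees, but atan only sees the ratio y/x = 1:

% atan only sees the ratio y/x and loses the quadrant information
atan(-1 / -1)     % returns  0.7854 rad (+45 degrees)  -- wrong quadrant
atan2(-1, -1)     % returns -2.3562 rad (-135 degrees) -- correct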
In MATLAB, for a robot model R, we can use the command ikine in the Robotics Toolbox to compute the joint angles required to achieve the specified end-effector pose described by a transform matrix denoted T, as follows:
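A minimal sketch of the call, assuming R is a SerialLink robot object and T is a 4x4 homogeneous transform:

q = R.ikine(T)     % joint angles that achieve the end-effector pose T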
The R is an arbitrary notation I have assigned for a serial-link robot, but this object needs to be defined using the MATLAB library. To simply test the command for the time being, you could use mdl_hyper3d or mdl_puma560, which are models already defined in the Robotics Toolbox. In that case, for the hyper3d robot, the inverse kinematics can be computed using the command:
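Something along these lines, assuming the mdl_hyper3d script leaves a SerialLink object in the workspace (the variable name h3d below is an assumption; check the name the script actually creates):

mdl_hyper3d              % load the hyper-redundant 3D robot model
q = h3d.ikine(T);        % numerical IK for the pose T (h3d is the assumed model variable)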
Where there is more than one set of joint angles which achieves the desired end-effector pose, as shown above with the positive or negative values of $q_2$, the analytic inverse kinematics command ikine6s allows us to specify the desired configuration: 'r' for the right-handed solution and 'l' for the left-handed one.
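For example, with the Puma 560 model, which defines the robot object p560 along with some standard joint configurations, we might select between the two arm configurations like this (the pose T is generated from a known configuration so that it is reachable):

mdl_puma560                       % defines the SerialLink object p560 and poses qz, qr, qn
T = p560.fkine(qn);               % a reachable end-effector pose to test with
q_right = p560.ikine6s(T, 'ru');  % right-handed, elbow-up configuration
q_left  = p560.ikine6s(T, 'lu');  % left-handed, elbow-up configuration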
We have covered the analytical method of computing the joint angles using inverse kinematics. However, it can be more convenient and practical to use what is called the numerical method, especially when the robot has a large number of joints, i.e. more than 6.
The advantage of using the analytical approach is that it explicitly shows the multiple configurations that the robot arm can be in for a particular end-effector pose. When working with a robot with a small number of joints, the analytical approach can be the best option as it provides a compact solution which is faster to execute than the numerical alternative.
However, it may be mathematically difficult, or even impossible, to find a closed-form inverse kinematics solution for a more complex robot. In this case the numerical method should be used.
We have seen that we can always find the forward kinematics of the robot. To achieve a desired pose we need only adjust the joint angles so as to move our end-effector from the starting pose to the final pose we want. This can be presented as an optimization problem, defined as:
$$ q^{*} = \arg\min_{q}\; \lVert K(q) - \xi^{*} \rVert $$
where $K(q)$ is the forward kinematics and $\xi^{*}$ is the desired end-effector pose.
Using MATLAB, we use the ikine command mentioned above. This is fairly simple, and only requires us to specify the final end-effector pose and the initial joint angles, as shown by the example below for a puma560 robot:
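A sketch of that call; the target pose here is generated from a known configuration so that it is reachable, and qz (the zero pose defined by the model) is used as the initial guess. Depending on the Toolbox version, the initial joint angles are passed either positionally, as below, or via the 'q0' option:

mdl_puma560                % defines p560 and the joint configurations qz, qr, qn
T = p560.fkine(qn);        % the desired end-effector pose
q = p560.ikine(T, qz);     % numerical IK, starting the search from qz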
One shortcoming of the numerical method is that it can be difficult to obtain all the possible joint angle configurations which can achieve a particular end-effector pose, as MATLAB will only return one configuration, determined by the initial joint angles specified.
In addition, if the initial joint coordinates are too far from the joint coordinates needed to achieve the desired pose, then the solution may not converge.
Note: when trying to compute the inverse kinematics of a redundant robot we need to use the numerical solution, but there will be an infinite number of joint configurations that result in the same end-effector pose.
Singularity is a problem in robot control, similar to gimbal lock, which arises when two or more joint axes are aligned.
The alignment of the two joints results in the loss of a degree of freedom, as the motion of either joint simply creates the same motion of the end-effector. It therefore becomes impossible for the robot end-effector to achieve certain orientations while these joints are aligned, orientations that were otherwise possible when the joints were not aligned. We will see this problem pop up in several robotics problems.
To move a robot from one pose to another, we need to determine the joint angles at each pose by carrying out the inverse kinematics for both poses.
To move smoothly between the two poses, or sets of joint angles, we need to interpolate these vector quantities over time.
We can simulate the motion between two end-effector poses, within the robot workspace, by creating a trajectory between the two sets of joint angles and specifying the number of time steps in which to perform the motion. We can use the command jtraj:
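A sketch of the call described below, assuming q0 and q1 are the joint-angle vectors obtained from the inverse kinematics of the two poses:

Tr = jtraj(q0, q1, 50);    % joint-space trajectory: 50 steps from q0 to q1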
In this example I have asked MATLAB to perform the motion over 50 time steps, between the sets of joint angles q0 and q1, which will return a matrix of 50 rows and 6 columns (for a 6-joint robot in 3D such as the puma560), where each row corresponds to a time step and each column corresponds to a joint angle.
We can then visualize this using the plot command with the robot we are considering, i.e. for a puma560:
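For example, assuming the joint trajectory from jtraj is stored in Tr and the Puma 560 model has been loaded:

p560.plot(Tr)    % animate the Puma 560 stepping through the 50-row trajectory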
We can then use the qplot(Tr) command to plot a figure of the change in joint angles over time. This should show that the joint angles, and hence the Cartesian position of the end-effector, change smoothly as a function of time, as we would want.

However, if we plot the roll-pitch-yaw angles against the time steps, we can see that not all of the angles remain constant at zero during the motion. This is because we have only constrained the orientation at the beginning and end of the trajectory, at poses A and B.
This indicates that the end-effector did not follow the desired straight path and that its orientation deviated along the trajectory.
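One way to produce that plot, assuming the joint trajectory is in Tr; the exact return types of these Toolbox calls vary a little between versions, so this is a sketch:

T   = p560.fkine(Tr);    % forward kinematics at every time step along the trajectory
rpy = tr2rpy(T);         % roll-pitch-yaw angles, one row per time step
plot(rpy)                % plot the three angles against the time step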
We can use Cartesian interpolation to resolve this.
As opposed to interpolating the joint angles, in Cartesian interpolation we directly interpolate the Cartesian pose and then perform the inverse kinematics to find the joint angles at every individual time step between the two specified poses. The interpolated pose is a blend of the initial pose $\xi_0$ and the final pose $\xi_1$, controlled by a parameter s, where s varies smoothly from 0 to 1.
So in MATLAB we will use the command ctraj for the Cartesian trajectory, from an initial and final pose, and pass in the number of time steps, e.g.:
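A sketch of the call, assuming T0 and T1 are the initial and final poses as homogeneous transforms:

Tc = ctraj(T0, T1, 50);    % Cartesian trajectory: 50 interpolated poses from T0 to T1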
This will give a series of 50 poses, each of which is a 4×4 matrix, one matrix for every time step.
We can then pass the Cartesian trajectory we have defined (Tc) into the inverse kinematics of the robot (puma560):
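A sketch using the Puma 560's closed-form solver (the numerical ikine also accepts a sequence of poses):

qc = p560.ikine6s(Tc);    % inverse kinematics evaluated at every pose along Tc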
This will return 50 sets of joint angles, one set for every time step.