Extended Euclidean Algorithm
Overview
The Extended Euclidean Algorithm builds on top of the basic Euclidean algorithm. Besides calculating the greatest common divisor, it helps us solve certain linear Diophantine equations and find modular multiplicative inverses.
Takeaways
- Complexity of the Extended Euclidean Algorithm
  - Time complexity: O(log(min(a, b)))
Introduction to Extended Euclidean Algorithm
Imagine you encounter an equation like,
ax + by = c
and you are asked to solve for x and y. How would you do it? 🤔
With just that one equation and two unknowns? Not really!
You see, if I provide you one more relation along the lines of 'c is divisible by the greatest common divisor of a and b', then the above equation can be solved using the Extended Euclidean algorithm.
Now, before we dive into the extended edition, here's a no-brainer --- I would highly recommend reading about the basic Euclidean algorithm first (if you haven't already), because this article assumes you already have a fundamental understanding of the basic algorithm, its intuition, and its time complexity analysis.
Moreover, we'll only talk about the Extended Euclidean algorithm here.
So, without further ado, let's explore the extension the Euclidean algorithm lends itself to for solving such equations! 😇
The Extended Euclidean Algorithm
This extended version builds on top of the basic Euclidean algorithm and is able to solve linear Diophantine equations of the form:
ax + by = GCD(a, b)
where GCD refers to the Greatest Common Divisor.
Later on, we'll see how we can take the above equation and generalize it to cover any equation of the form ax + by = c, where c % GCD(a, b) == 0.
But for now, let's build intuition first!
How does the Extended Euclidean Algorithm Work?
The extended algorithm attempts to solve for the values of x and y along with the gcd. So, by "how does it work?" --- we mostly mean how do we find the values of x and y!
To answer it… ✍🏻
Let's consider the equation ax + by = g, where g is GCD(a, b). We are already familiar with the fact that the basic Euclidean algorithm terminates with the base case a = g and b = 0.
So, by choosing x and y as 1 and 0 respectively, we confirm that our equation is balanced, like so;
g * 1 + 0 * 0 = g
and we ought to maintain this balanced form throughout the recursion stack of our algorithm. 💪🏻
Say, the state of the algorithm during the base case is represented as;
(a, b, x, y) = (g, 0, 1, 0)
Now, what would the previous state look like in the recursion stack? Since we don't know the answer to that question, let's assume the state to be;
(a', b', x', y')
where we know,
a = b' and b = a' % b'
That is, we can specify how a and b relate to a' and b' going up the recursion stack.
Now, in order to bring it all together, we rely on the fact that the one thing that remains common in between recursive calls is the greatest common divisor.
So, we can relate subsequent calls like so;
a'x' + b'y' = ax + by = g
Notice that we care about finding x' and y' in terms of x and y. Similarly, to determine the next x'' and y'', we would leverage x' and y', i.e. we go down the recursion stack as opposed to going up!
I know, I know.... 😣
Don’t worry, this usually makes sense on the second or third read!
Still, let's unpack all this gibberish by considering the following example: 13x + 21y = 1, where g = GCD(13, 21) = 1.
Going up the recursion stack, the calls shrink (a, b) as (13, 21) → (21, 13) → (13, 8) → (8, 5) → (5, 3) → (3, 2) → (2, 1) → (1, 0).
Coming back down, each state (a, b, x, y) works out to;
(1, 0, 1, 0) → (2, 1, 0, 1) → (3, 2, 1, -1) → (5, 3, -1, 2) → (8, 5, 2, -3) → (13, 8, -3, 5) → (21, 13, 5, -8) → (13, 21, -8, 5)
As you can see, the base case always stands at x = 1, y = 0 for the equation to result in g = 1.
Now, while going down the recursion stack, every level recomputes the values of x and y according to the changing a and b, for the equation to result in g = 1.
Doing so, we get the final values as x = -8 and y = 5 for the coefficients a = 13 and b = 21.
We can verify the result by substituting x and y back in the equation, like so;
13 * (-8) + 21 * 5 = -104 + 105 = 1 = g
LHS = RHS
But how did we come up with that recomputation logic for ‘x’ and ‘y’? 🧐
That's precisely what we are about to explore next! 🤗
Remember these relations?
a = b' and b = a' % b'
Hence, substituting the values of a and b in the following equation,
ax + by = g
we get,
b'x + (a' % b')y = g
Rewriting a' % b' as a' - b' * (a' // b'), we have
b'x + (a' - b' * (a' // b'))y = g
Rearranging the terms, we get
a'(y) + b'(x - (a' // b') * y) = g
Comparing this with a'x' + b'y' = g, i.e. comparing the coefficients of a' and b' on both sides, we can express x' and y' in terms of the previous state (a, b, x, y), like so;
x' = y
y' = x - (a' // b') * y
Similarly, for subsequent recursion states, we perform this recomputation for x and y to account for the changing a and b, in order to balance the equation and always result in g.
Since we know the base case to be (g, 0, 1, 0), we can easily deduce the previous state and its previous state and on and on up until the original equation!
In our case that would be, 13x + 21y = 1
In the end, the final x and y would be the required answer! 🥳
That’s it!
Enough with the theory... 🙅🏻
Now, that we are familiar with the crux of the Extended Euclidean algorithm, let’s see it in action! 😎
Recursive Implementation of the Extended Euclidean Algorithm
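Below is a minimal recursive Python sketch of the logic we just derived; the function name extended_gcd and the (g, x, y) return order are illustrative choices rather than a fixed convention. Equivalent C++ and Java versions follow the exact same structure.

```python
def extended_gcd(a, b):
    """Return (g, x, y) such that a*x + b*y == g == GCD(a, b)."""
    if b == 0:
        # Base case: GCD(a, 0) = a, and a*1 + 0*0 = a
        return a, 1, 0

    # First solve the deeper call: b*x1 + (a % b)*y1 = g
    g, x1, y1 = extended_gcd(b, a % b)

    # Recompute the coefficients for the current (a, b) using the rule derived
    # above: x' = y, y' = x - (a' // b') * y, where (x1, y1) are the deeper
    # call's coefficients and the current (a, b) play the role of (a', b').
    x = y1
    y = x1 - (a // b) * y1
    return g, x, y


g, x, y = extended_gcd(13, 21)
print(g, x, y)  # 1 -8 5, i.e. 13*(-8) + 21*5 = 1
```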
Time complexity of Extended Euclidean Algorithm
We have already seen that the time complexity of the basic Euclidean algorithm is O(log(min(a, b))).
Also, the two additional steps for recomputing subsequent x and y amount to O(1) if we assume floor division to be a constant-time operation. So, it stands to reason that the complexity remains the same as before.
In other words, the time complexity of the Extended Euclidean algorithm is also O(log(min(a, b))).
This makes the algorithm really efficient in certain use cases… 🔍
How is the Extended Euclidean Algorithm Useful?
The Extended Euclidean algorithm can be used to solve:
- Linear Diophantine equations of the form ax + by = c iff c is divisible by g,
where,
x = x0 * (c / g) and y = y0 * (c / g), with (x0, y0) being any solution of a*x0 + b*y0 = g
As you can see, a valid integer-based solution is possible only when c is divisible by g (see the first sketch after this list).
- The modular multiplicative inverse of 'a' with respect to 'm' iff 'a' and 'm' are co-prime, i.e. solving
a * x ≡ 1 (mod m), which is the same as a * x + m * y = 1 = GCD(a, m)
Hence, x (taken modulo m) is the modular multiplicative inverse, and we can solve for it by applying the same principles of the extended algorithm (see the second sketch after this list).
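For the first use case, here's a small sketch of how one might scale a solution of ax + by = g up to ax + by = c; it repeats the extended_gcd function from the implementation section so the snippet runs on its own, and the name solve_diophantine is just an illustrative choice.

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == GCD(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x1, y1 = extended_gcd(b, a % b)
    return g, y1, x1 - (a // b) * y1


def solve_diophantine(a, b, c):
    """Return one integer solution (x, y) of a*x + b*y = c, or None if none exists."""
    g, x0, y0 = extended_gcd(a, b)   # a*x0 + b*y0 == g
    if c % g != 0:
        return None                   # no integer solution unless g divides c
    scale = c // g
    return x0 * scale, y0 * scale


print(solve_diophantine(13, 21, 5))   # (-40, 25), since 13*(-40) + 21*25 = 5
```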
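And for the second use case, here's a sketch of computing the modular multiplicative inverse with the same machinery, assuming the extended_gcd function defined above is in scope; mod_inverse is again an illustrative name.

```python
def mod_inverse(a, m):
    """Return the inverse of a modulo m, or None if a and m are not co-prime."""
    g, x, _ = extended_gcd(a, m)   # a*x + m*y == g
    if g != 1:
        return None                 # an inverse exists only when GCD(a, m) == 1
    return x % m                    # bring x into the range [0, m)


print(mod_inverse(13, 21))          # 13, since 13 * 13 = 169 ≡ 1 (mod 21)
```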
Conclusion
We have addressed the what, how, and why of the Extended Euclidean algorithm. We discussed the natural intuition that follows from the basic Euclidean algorithm.
To wrap it all up, here's the TL;DR:
- The Extended Euclidean algorithm builds on top of the basic Euclidean algorithm.
- It can solve linear diophantine equations of the form: ax + by = c, where c is divisible by the greatest common divisor of a and b.
- Its time complexity, O(log(min(a, b))), is the same as that of the basic algorithm.
Now, that you have an understanding of the theory behind the Extended Euclidean algorithm...
Go break some test cases, and Euclid would be so proud of you!
😇