# Potential Method in Amortized Analysis

**What is Amortization?**

Suppose you own a company and need a vehicle. The car costs €10,000, so you will spend €10,000 in the year you buy it. You then plan to keep driving it for the next ten years, and operating it costs an additional €1,000 per year.

There are two ways to view this situation. The first is the one used above: one year with very high spending followed by nine years with lower spending. The second is to spread (amortize) the purchase price over the car’s lifetime: €10,000/10 + €1,000 = €2,000 per year, the same total cost viewed as a steady average.

Similarly, for a data structure, when every execution of an operation takes roughly the same amount of time, determining a suitable upper bound on the time complexity is simple: multiply the number of operations by the longest possible time for a single one.

As with the car, whose yearly cost can go up and down but averages out over its lifetime, some operations on a data structure can be costlier than others, yet the overall cost of a sequence of operations stays small.

**Amortized Time Complexity**

Amortized complexity is the total cost of a sequence of operations divided by the number of operations; in other words, the average time taken per operation when many operations are performed.

We can understand it with the example of **ArrayList**.

When an **ArrayList** hits its maximum capacity, it creates a new backing array of double the old size and copies all the old elements into it.

So, as long as the size is less than the capacity, adding an element takes **O(1)** time. But when an insertion arrives while size equals capacity, the list must first allocate a new array of twice the size and copy the **n** existing elements into it, which takes **O(n)** time, before performing the insertion.
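As a sketch, the doubling behavior can be modeled with a minimal dynamic array in Python (the `DynamicArray` class below is an illustrative stand-in, not the real ArrayList implementation):

```python
class DynamicArray:
    """Minimal dynamic array that doubles its capacity when full."""

    def __init__(self):
        self.capacity = 1          # size of the backing array
        self.size = 0              # number of stored elements
        self.data = [None] * self.capacity

    def append(self, value):
        if self.size == self.capacity:
            # Full: allocate a backing array twice as large and copy
            # the old elements over -- this is the O(n) step.
            self.capacity *= 2
            new_data = [None] * self.capacity
            for i in range(self.size):
                new_data[i] = self.data[i]
            self.data = new_data
        self.data[self.size] = value   # the cheap O(1) step
        self.size += 1
```

Starting from capacity 1, ten appends trigger resizes at sizes 1, 2, 4, and 8, leaving a backing array of capacity 16.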

**Total Amortized time complexity**

Let us now work out the exact worst-case cost without simplifying.

- If the internal array capacity starts at 1, then the capacities reached are 1, 2, 4, 8, 16, …, X: each time the list hits its capacity, the capacity doubles.
- At each doubling, 1, 2, 4, 8, 16, …, X items are copied into the new array, depending on the capacity reached.
- So the total copying cost is 1 + 2 + 4 + 8 + 16 + … + X. Reading the sum backwards from X, this is **X + X/2 + X/4 + X/8 + … + 1 ≈ 2X**.
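The geometric sum can be checked numerically; the helper below (an illustrative sketch, not part of the original) adds up the items copied at each doubling:

```python
def total_copy_cost(X):
    """Sum of items copied at each doubling: 1 + 2 + 4 + ... + X,
    where X is the largest capacity reached (a power of two)."""
    cost, cap = 0, 1
    while cap <= X:
        cost += cap     # cap items are copied when growing past cap
        cap *= 2
    return cost
```

For X = 1024 this returns 2047 = 2·1024 − 1, confirming that the total copying work is about 2X, i.e. linear in the final size.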

We try to show that even though some of the operations may be expensive, the sum (or, equivalently, the average) of their running times always has to be small.

We can understand this with the help of a diagram that explains it with an example:

**What is the Potential method?**

In computational complexity theory, the potential method is defined as:

A method implemented to analyze the amortized time and space complexity of a data structure, a measure of its performance over sequences of operations that eliminates the cost of infrequent but expensive operations.

The potential approach defines a current potential that can be calculated directly from the present state of the algorithm or data structure.

The potential technique chooses a function **Φ** that maps the data structure’s states to non-negative values. If **S** denotes a state of the data structure, **Φ(S)** represents work that has been accounted for in the amortized analysis but has not yet been performed. Thus **Φ(S)** can be thought of as the amount of potential energy stored in that state. The potential is set to zero when the data structure is initialized. Alternatively, **Φ(S)** can be thought of as measuring the degree of disorder in state **S**, or its distance from an ideal state.

We can designate a potential function **Φ** (read “**Phi**“) on a data structure’s states if it meets the criteria listed below.

- **Φ(a_{0}) = 0**, where **a_{0}** is the starting state of the data structure.
- **Φ(a_{t}) ≥ 0** for all states **a_{t}** of the data structure occurring during the course of the computation.

At each stage in the computation, the potential function should be able to maintain track of the precharged time. It calculates the amount of time that can be saved up to cover expensive operations. Intriguingly, though, it simply depends on the data structure’s current state, regardless of the history of the computation that led to that state.

We then define the amortized time of an operation as

**c + Φ(a′) − Φ(a)**, where **c** is the actual cost of the operation and **a** and **a′** are the states of the data structure before and after the operation, respectively.

As a result, the amortized time is the actual time plus the change in potential. Ideally, **Φ** should be defined so that the amortized time of each operation is small.
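To make the definition concrete, here is a sketch that applies it to the dynamic-array example above, using the standard potential **Φ(S) = 2·size − capacity** (the function name and simulation details are illustrative assumptions):

```python
def amortized_append_costs(n):
    """Simulate n appends to a doubling array and return the amortized
    cost of each, using the potential Phi(S) = 2*size - capacity
    (a standard choice: Phi = 0 for the empty structure and stays >= 0
    because the array is always at least half full)."""
    size, cap = 0, 0
    phi = 0
    costs = []
    for _ in range(n):
        if size == cap:                    # full: double (or create) the array
            actual = size + 1              # copy `size` items, then insert one
            cap = max(1, 2 * cap)
        else:
            actual = 1                     # cheap insert
        size += 1
        new_phi = 2 * size - cap
        costs.append(actual + new_phi - phi)   # c + Phi(a') - Phi(a)
        phi = new_phi
    return costs
```

The first append has amortized cost 2 and every later append has amortized cost exactly 3, even those that copy the whole array: the cheap O(1) appends bank potential that pays for the occasional O(n) resize.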

## Analysis of potential method with example

__Stack operations__

**Push operation:** Time complexity to push an item onto a stack is O(1)

**Pop operation:** Time complexity to pop an item from a stack is O(1)

**Multipop operation:** Time complexity to pop **k** items from a stack is **O(min(stack size, k))**.

**Algorithm:**

```python
def multipop(S, k):
    # Pop up to k items from stack S (a Python list);
    # stops early if the stack becomes empty.
    while S and k != 0:
        S.pop()
        k -= 1
```

Let us define the potential of a stack as the number of elements in the stack.

So, **Φ(D_{0}) = 0** and **Φ(D_{i}) ≥ 0** for every state **D_{i}**.

**Amortized cost of Push:** According to the potential function:

Amortized cost(C_{i}) = Actual cost(c_{i}) + change in potential

C_{i} = c_{i} + Φ(D_{i}) − Φ(D_{i−1}) = 1 + n − (n − 1) = 2

So, C_{i} is **O(1)**.

**Amortized cost of Pop:** According to the potential function:

Amortized cost(C_{i}) = Actual cost(c_{i}) + change in potential

C_{i} = c_{i} + Φ(D_{i}) − Φ(D_{i−1}) = 1 + (n − 1) − n = 0

So, C_{i} is **O(1)**.

**Amortized cost of MultiPop:** According to the potential function, with **s** elements on the stack and **k′ = min(s, k)** items actually popped:

Amortized cost(C_{i}) = Actual cost(c_{i}) + change in potential

C_{i} = c_{i} + Φ(D_{i}) − Φ(D_{i−1}) = k′ + (s − k′) − s = 0

So, C_{i} is **O(1)**.
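The three derivations above can be checked with a short simulation (the `run_ops` helper and its operation encoding are illustrative assumptions), charging each operation its actual cost plus the change in **Φ** = number of elements on the stack:

```python
def run_ops(ops):
    """Execute stack operations and return (op_name, amortized_cost)
    pairs, using the potential Phi(D) = number of elements on the stack."""
    stack = []
    results = []
    for op in ops:
        phi_before = len(stack)
        if op[0] == "push":
            stack.append(op[1])
            actual = 1
        elif op[0] == "pop":
            stack.pop()
            actual = 1
        else:                              # ("multipop", k)
            k = min(len(stack), op[1])     # can pop at most len(stack) items
            del stack[len(stack) - k:]
            actual = k
        amortized = actual + len(stack) - phi_before   # c + change in Phi
        results.append((op[0], amortized))
    return results
```

Every push comes out with amortized cost 2, and every pop or multipop with amortized cost 0, matching the analysis above.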

Now, we have the final result as shown in the table below:

| Operation | Amortized cost |
|---|---|
| Push | O(1) |
| Pop | O(1) |
| Multipop | O(1) |

So, the amortized cost of each operation is **O(1)**, and therefore the total cost of any sequence of **N** operations is **O(N)**.

This is the general pattern: the change in potential should be **positive for low-cost operations** (banking credit) and **negative for high-cost operations** (spending it), so that the amortized cost of every operation stays small.
