1611. Minimum One Bit Operations to Make Integers Zero

Given an integer n, you must transform it into 0 using the following operations any number of times:

  • Change the rightmost (0th) bit in the binary representation of n.
  • Change the ith bit in the binary representation of n if the (i-1)th bit is set to 1 and the (i-2)th through 0th bits are set to 0.

Return the minimum number of operations to transform n into 0.

 

Example 1:

Input: n = 0
Output: 0

Example 2:

Input: n = 3
Output: 2
Explanation: The binary representation of 3 is "11".
"11" -> "01" with the 2nd operation since the 0th bit is 1.
"01" -> "00" with the 1st operation.

Example 3:

Input: n = 6
Output: 4
Explanation: The binary representation of 6 is "110".
"110" -> "010" with the 2nd operation since the 1st bit is 1 and the 0th bit is 0.
"010" -> "011" with the 1st operation.
"011" -> "001" with the 2nd operation since the 0th bit is 1.
"001" -> "000" with the 1st operation.

Example 4:

Input: n = 9
Output: 14

Example 5:

Input: n = 333
Output: 393

 

Constraints:

  • 0 <= n <= 10^9

Related Topics:
Dynamic Programming, Bit Manipulation

Solution 1.

Operation 1 is `n ^ 1`. Operation 2 is `n ^ (lowbit(n) << 1)`, where `lowbit(n) = n & -n` isolates the lowest set bit — so operation 2 flips the bit directly above the lowest set bit.

| n | f(n) |
| --- | --- |
| 0 | 0 |
| 1 | 1 |
| 2 | 3 |
| 3 | 2 |
| 4 | 7 |
| 5 | 6 |
| 6 | 4 |
| 7 | 5 |
| 8 | 15 |
| 9 | 14 |
| 10 | 12 |
| 11 | 13 |
| 12 | 8 |
| 13 | 9 |
| 14 | 11 |
| 15 | 10 |
| 16 | 31 |
| 17 | 30 |
| 18 | 28 |
| 19 | 29 |
| 20 | 24 |

Observations:

Observation 1: Since we are looking for the shortest path, BFS is the natural first option. But n can be up to 1e9 and the number of steps can exceed n (for example, f(8) = 15), so BFS will get TLE.
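For small n, though, a plain BFS is a handy cross-check. A minimal sketch (`bfsMinOps` is a hypothetical helper, not part of the solutions below; the cap 2^(k+2) is a safety margin — by the Gray-code fact in Solution 2, the optimal path never visits a value of 2^(k+1) or more, where 2^k is the highest bit of n):

```cpp
#include <queue>
#include <vector>

// Brute-force checker: BFS over states reachable via the two operations.
// Only feasible for small n; for n up to 1e9 this TLEs, as noted above.
int bfsMinOps(int n) {
    if (n == 0) return 0;
    int bits = 0;
    while ((1 << bits) <= n) ++bits;  // bits = index of highest set bit + 1
    int limit = 1 << (bits + 2);      // state-space cap with safety margin
    std::vector<int> dist(limit, -1);
    std::queue<int> q;
    dist[n] = 0;
    q.push(n);
    while (!q.empty()) {
        int cur = q.front(); q.pop();
        if (cur == 0) return dist[cur];
        // The two legal moves: flip bit 0, or flip the bit above the lowest set bit.
        int next[2] = { cur ^ 1, cur ^ ((cur & -cur) << 1) };
        for (int nxt : next) {
            if (nxt < limit && dist[nxt] == -1) {
                dist[nxt] = dist[cur] + 1;
                q.push(nxt);
            }
        }
    }
    return -1; // never reached: 0 is always reachable
}
```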

Observation 2: We must apply operations 1 and 2 alternately: both are XORs, so repeating the same operation twice in a row just undoes it and wastes two moves.

Observation 3: Based on observation 2, we must start with either operation 1 or operation 2 and then keep applying the two operations alternately until we reach 0.

Then how do we determine which operation to start with? I found it depends on the number of set bits in n:

If there is an odd number of set bits in n, we start with operation 1; otherwise we start with operation 2.

For example:

n = 2
10 -- operation 1
11 -- operation 2
01 -- operation 1
00

And as the trace above shows, from the intermediate state 11 (n = 3, which has an even number of set bits) we continue with operation 2.

This is true because both operations flip exactly one bit and therefore change the parity of the number of set bits in n. So if we start with operation 1 for an n with an odd number of set bits, then operation1(n) has an even number of set bits and must start with operation 2, and so on. Since n = 1 (one set bit, odd) starts with operation 1, the rule above follows by induction.

With this observation, we can first check the parity of set bits in n, then apply operations 1 and 2 alternately. But this will still get TLE because the simulation takes f(n) steps, which is O(N).
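Still, that simulation is a useful exact cross-check for small n. A minimal sketch (`simulateMinOps` is a hypothetical name; `__builtin_popcount` is the GCC/Clang builtin):

```cpp
// O(f(n)) simulator: pick the starting operation from the parity of set bits,
// then strictly alternate operations 1 and 2 until n reaches 0.
int simulateMinOps(int n) {
    bool useOp1 = __builtin_popcount(n) % 2 == 1; // odd set bits -> start with op 1
    int steps = 0;
    while (n) {
        if (useOp1) n ^= 1;          // operation 1: flip the 0th bit
        else n ^= (n & -n) << 1;     // operation 2: flip the bit above the lowest set bit
        useOp1 = !useOp1;
        ++steps;
    }
    return steps;
}
```

For instance, it reproduces the table above: simulateMinOps(2) == 3 and simulateMinOps(9) == 14.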

Observation 4: If n = 2^k, then f(n) = 2^(k + 1) - 1. For example, f(8) = 15.

In order to get from n = 2^k to 2^(k-1), we need to visit all the binary numbers under 2^k. For example, to get from n = 4 to 2, we need to do:

100 -- 1
101 -- 2
111 -- 3
110 -- 4
010

So we have f(2^k) = 2^k + f(2^(k-1)). And since f(1) = 1, we can get that f(2^k) = 2^(k + 1) - 1.
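Unrolling the recurrence down to $f(1) = 1$ gives a geometric sum:

$$ f(2^k) = 2^k + 2^{k-1} + \dots + 2^1 + f(1) = \sum_{i=0}^{k} 2^i = 2^{k+1} - 1 $$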

Observation 5:

Our goal is always to clear the highest set bit first (let's denote it as highestBit). To clear highestBit with operation 2, the bit just below it must be 1 and all lower bits must be 0 (let's denote that next bit as secondHighestBit).

So if secondHighestBit is 1, i.e. n = 11xxx...xxx, then we can do the following:

11xxx...xxx --- 1
11000...000 --- 2
01000...000 --- 3
00000...000

The steps required for processes 2 and 3 are known: process 2 is a single operation-2 step, and process 3 costs f(2^(k-1)) = 2^k - 1. The steps required for process 1 are exactly f(xxx...xxx), where xxx...xxx are the bits after the second highest bit.

As you can see, it's a recursive process.

Now consider n = 10xxx...xxx; we first need to turn it into 11000...000.

How many steps are required to get from 10xxx...xxx to 11000...000?

(When we ignore the leading 1) It's the same as the steps required from 0xxx...xxx to 1000...000, or the other way around from 1000...000 to 0xxx...xxx.

We know f(1000...000), so once we know f(0xxx...xxx), the steps required from 10xxx...xxx to 11000...000 are f(1000...000) - f(0xxx...xxx).

Again, this is a recursive problem.

Let's summarize this observation.

Let

$$ n = 2^k \cdot x_k + 2^{k-1} \cdot x_{k-1} + 2^{k-2} \cdot x_{k-2} + ... + 2^0 \cdot x_0 $$

where each $x_i \in \{0, 1\}$ and $x_k = 1$ is the leftmost set bit.

Note that

$$f(2^k + 2^{k-1}) = f(2^{k-1}) + 1 = 2^k$$

If $x_{k-1}$ is 1, then

$$ f(n) = f(2^{k-2} \cdot x_{k-2} + ... + 2^0 \cdot x_0) + f(2^k + 2^{k-1}) $$

$$ f(n) = f(2^{k-2} \cdot x_{k-2} + ... + 2^0 \cdot x_0) + 2^k $$

If $x_{k-1}$ is 0, then

$$f(n) = f(2^k + 2^{k-1}) + f(2^{k-1}) - f(2^{k-2} \cdot x_{k-2} + ... + 2^0 \cdot x_0)$$

$$f(n) = 2^{k+1}-1 - f(2^{k-2} \cdot x_{k-2} + ... + 2^0 \cdot x_0)$$
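As a quick check against the examples above: for $n = 6 = 110_2$ we have $k = 2$ and $x_{k-1} = 1$, so $f(6) = f(0) + 2^2 = 4$, matching Example 3. For $n = 9 = 1001_2$ we have $k = 3$ and $x_{k-1} = 0$, so $f(9) = 2^4 - 1 - f(001_2) = 15 - 1 = 14$, matching Example 4.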

```cpp
// OJ: https://leetcode.com/problems/minimum-one-bit-operations-to-make-integers-zero/
// Author: github.com/lzl124631x
// Time: O(logN)
// Space: O(logN)
class Solution {
    inline unsigned lowbit(unsigned x) { return x & -x; }
public:
    int minimumOneBitOperations(int n) {
        if (n <= 1) return n;
        unsigned first = ~0;
        while (first & n) first <<= 1; // clear mask bits up to and including the highest set bit of n
        first = lowbit(first >> 1);    // `first` = the highest set bit of n
        unsigned second = (first >> 1) & n, rest = minimumOneBitOperations(n & ~first & ~second);
        return second ? first + rest : (first << 1) - 1 - rest; // the two cases of Observation 5
    }
};
```

Solution 2. Gray Code

n is actually the f(n)-th Gray code, i.e. f(n) ^ (f(n) >> 1) = n. So we can use the following code to convert the Gray code n to its corresponding binary index f(n).


```cpp
// OJ: https://leetcode.com/problems/minimum-one-bit-operations-to-make-integers-zero/
// Author: github.com/lzl124631x
// Time: O(logN)
// Space: O(1)
class Solution {
public:
    int minimumOneBitOperations(int n) {
        int ans = 0;
        while (n) { // Gray-to-binary: XOR of all right shifts of n
            ans ^= n;
            n >>= 1;
        }
        return ans;
    }
};
```
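To sanity-check the Gray-code claim: applying the standard binary-to-Gray map `b ^ (b >> 1)` to the answer should give back n. A minimal sketch (helper names are hypothetical):

```cpp
// Binary index -> Gray code (standard map).
int toGray(int b) { return b ^ (b >> 1); }

// Gray code -> binary index: XOR-fold of all right shifts, same as Solution 2.
int fromGray(int g) {
    int b = 0;
    while (g) { b ^= g; g >>= 1; }
    return b;
}
```

For every n, `toGray(fromGray(n)) == n`, and `fromGray(n)` equals the answer: fromGray(6) == 4, fromGray(9) == 14, fromGray(333) == 393.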

Or

```cpp
// OJ: https://leetcode.com/problems/minimum-one-bit-operations-to-make-integers-zero/
// Author: github.com/lzl124631x
// Time: O(logN)
// Space: O(1)
class Solution {
public:
    int minimumOneBitOperations(int n) {
        // Same Gray-to-binary fold, unrolled for a 32-bit int.
        n ^= n >> 16;
        n ^= n >> 8;
        n ^= n >> 4;
        n ^= n >> 2;
        n ^= n >> 1;
        return n;
    }
};
```