HBASE-26328 Clone snapshot doesn't load reference files into FILE SFT impl #3749

Merged on Oct 22, 2021 (41 commits; changes shown are from 27 of the 41 commits).
470bd2e
HBASE-26064 Introduce a StoreFileTracker to abstract the store file t…
Apache9 Jul 29, 2021
8b26dcc
HBASE-25988 Store the store file list by a file (#3578)
Apache9 Aug 26, 2021
3b9936d
HBASE-26079 Use StoreFileTracker when splitting and merging (#3617)
wchevreuil Sep 8, 2021
1756584
HBASE-26224 Introduce a MigrationStoreFileTracker to support migratin…
Apache9 Sep 9, 2021
0ce891a
HBASE-26246 Persist the StoreFileTracker configurations to TableDescr…
wchevreuil Sep 12, 2021
989028b
HBASE-26248 Should find a suitable way to let users specify the store…
Apache9 Sep 14, 2021
6581fef
HBASE-26264 Add more checks to prevent misconfiguration on store file…
Apache9 Sep 15, 2021
ab03ee1
HBASE-26280 Use store file tracker when snapshoting (#3685)
Apache9 Sep 17, 2021
fe91ad4
HBASE-26064 Introduce a StoreFileTracker to abstract the store file t…
Apache9 Jul 29, 2021
dd7949f
HBASE-25988 Store the store file list by a file (#3578)
Apache9 Aug 26, 2021
cb35f18
HBASE-26079 Use StoreFileTracker when splitting and merging (#3617)
wchevreuil Sep 8, 2021
e6979ce
HBASE-26224 Introduce a MigrationStoreFileTracker to support migratin…
Apache9 Sep 9, 2021
6c712ec
HBASE-26246 Persist the StoreFileTracker configurations to TableDescr…
wchevreuil Sep 12, 2021
ae31799
HBASE-26248 Should find a suitable way to let users specify the store…
Apache9 Sep 14, 2021
150d995
HBASE-26264 Add more checks to prevent misconfiguration on store file…
Apache9 Sep 15, 2021
c4d7d28
HBASE-26280 Use store file tracker when snapshoting (#3685)
Apache9 Sep 17, 2021
1727115
HBASE-26326 CreateTableProcedure fails when FileBasedStoreFileTracker…
Oct 1, 2021
f02941a
Merge branch 'HBASE-26067' into HBASE-26326
wchevreuil Oct 7, 2021
27235c1
Changed method name
wchevreuil Oct 8, 2021
2e5a496
Addressing latest review comments
wchevreuil Oct 8, 2021
7d285ff
additional comments
wchevreuil Oct 8, 2021
a120c93
addressing latest review suggestions
wchevreuil Oct 8, 2021
c8566cf
Fixing UT failure
wchevreuil Oct 8, 2021
7b03fd2
HBASE-26328 Clone snapshot doesn't load reference files into FILE SFT…
Oct 8, 2021
ba9600c
Additional fixes
Oct 13, 2021
9c9789b
HBASE-26326 CreateTableProcedure fails when FileBasedStoreFileTracker…
wchevreuil Oct 13, 2021
d1a91be
Merge branch 'HBASE-26067' into HBASE-26328
wchevreuil Oct 13, 2021
d75b0ce
HBASE-26064 Introduce a StoreFileTracker to abstract the store file t…
Apache9 Jul 29, 2021
cab6140
HBASE-25988 Store the store file list by a file (#3578)
Apache9 Aug 26, 2021
708b7c1
HBASE-26079 Use StoreFileTracker when splitting and merging (#3617)
wchevreuil Sep 8, 2021
41ed7e2
HBASE-26224 Introduce a MigrationStoreFileTracker to support migratin…
Apache9 Sep 9, 2021
f2addb1
HBASE-26246 Persist the StoreFileTracker configurations to TableDescr…
wchevreuil Sep 12, 2021
5c35744
HBASE-26248 Should find a suitable way to let users specify the store…
Apache9 Sep 14, 2021
767a3e1
HBASE-26264 Add more checks to prevent misconfiguration on store file…
Apache9 Sep 15, 2021
be5a148
HBASE-26280 Use store file tracker when snapshoting (#3685)
Apache9 Sep 17, 2021
d3556bf
HBASE-26326 CreateTableProcedure fails when FileBasedStoreFileTracker…
wchevreuil Oct 13, 2021
c53a80d
Merge branch 'HBASE-26067' into HBASE-26328
wchevreuil Oct 16, 2021
714e6a0
Fixing UTs, checkstyle and addressing review comments.
Oct 19, 2021
bbaf3e6
Merge branch 'HBASE-26067' into HBASE-26328
wchevreuil Oct 21, 2021
8e3400e
adapting this PR to the changes from HBASE-26386
Oct 21, 2021
1adaf42
Added/Fixed comments
Oct 22, 2021
@@ -0,0 +1,36 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
syntax = "proto2";
// This file contains protocol buffers that are used for store file tracker.
package hbase.pb;

option java_package = "org.apache.hadoop.hbase.shaded.protobuf.generated";
option java_outer_classname = "StoreFileTrackerProtos";
option java_generic_services = true;
option java_generate_equals_and_hash = true;
option optimize_for = SPEED;

message StoreFileEntry {
required string name = 1;
required uint64 size = 2;
}

message StoreFileList {
required uint64 timestamp = 1;
repeated StoreFileEntry store_file = 2;
}
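The new proto file above is the heart of the FILE-based StoreFileTracker: instead of listing the file system, the tracker persists a `StoreFileList` of `(name, size)` entries plus a timestamp. As a rough illustration of that round-trip idea, here is a minimal sketch in plain Java with no protobuf dependency — the class and method names are illustrative, not HBase or protobuf API:

```java
import java.io.*;
import java.util.*;

// Illustrative stand-in for the StoreFileList/StoreFileEntry proto messages:
// persist (name, size) entries plus a timestamp, then load them back.
public class StoreFileListSketch {
    public static void save(File f, long timestamp, Map<String, Long> files) throws IOException {
        try (DataOutputStream out = new DataOutputStream(new FileOutputStream(f))) {
            out.writeLong(timestamp);           // StoreFileList.timestamp
            out.writeInt(files.size());
            for (Map.Entry<String, Long> e : files.entrySet()) {
                out.writeUTF(e.getKey());       // StoreFileEntry.name
                out.writeLong(e.getValue());    // StoreFileEntry.size
            }
        }
    }

    public static Map<String, Long> load(File f) throws IOException {
        Map<String, Long> files = new LinkedHashMap<>();
        try (DataInputStream in = new DataInputStream(new FileInputStream(f))) {
            in.readLong(); // timestamp, ignored in this sketch
            int n = in.readInt();
            for (int i = 0; i < n; i++) {
                files.put(in.readUTF(), in.readLong());
            }
        }
        return files;
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("storefilelist", ".bin");
        f.deleteOnExit();
        Map<String, Long> files = new LinkedHashMap<>();
        files.put("hfile-1", 1024L);
        files.put("hfile-2", 2048L);
        save(f, System.currentTimeMillis(), files);
        System.out.println(load(f).equals(files)); // prints "true"
    }
}
```

In the real implementation the wire format is protobuf, which gives forward/backward compatibility that this hand-rolled encoding lacks.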
@@ -29,6 +29,7 @@
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.RegionInfoBuilder;
import org.apache.hadoop.hbase.mob.MobConstants;
import org.apache.hadoop.hbase.mob.MobUtils;
Contributor: Where do we use this class?

import org.apache.hadoop.hbase.regionserver.HRegion;
import org.apache.hadoop.hbase.regionserver.StoreFileInfo;
import org.apache.hadoop.hbase.util.CommonFSUtils;
@@ -202,7 +203,6 @@ public static boolean isHFileLink(final Path path) {
return isHFileLink(path.getName());
}


/**
* @param fileName File name to check.
* @return True if the path is a HFileLink.
@@ -326,7 +326,7 @@ public static String createHFileLinkName(final TableName tableName,
* @return true if the file is created, otherwise the file exists.
* @throws IOException on file or parent directory creation failure
*/
public static boolean create(final Configuration conf, final FileSystem fs,
public static String create(final Configuration conf, final FileSystem fs,
final Path dstFamilyPath, final RegionInfo hfileRegionInfo,
final String hfileName) throws IOException {
return create(conf, fs, dstFamilyPath, hfileRegionInfo, hfileName, true);
@@ -347,7 +347,7 @@ public static boolean create(final Configuration conf, final FileSystem fs,
* @return true if the file is created, otherwise the file exists.
* @throws IOException on file or parent directory creation failure
*/
public static boolean create(final Configuration conf, final FileSystem fs,
public static String create(final Configuration conf, final FileSystem fs,
final Path dstFamilyPath, final RegionInfo hfileRegionInfo,
final String hfileName, final boolean createBackRef) throws IOException {
TableName linkedTable = hfileRegionInfo.getTable();
@@ -370,7 +370,7 @@ public static boolean create(final Configuration conf, final FileSystem fs,
* @return true if the file is created, otherwise the file exists.
* @throws IOException on file or parent directory creation failure
*/
public static boolean create(final Configuration conf, final FileSystem fs,
public static String create(final Configuration conf, final FileSystem fs,
final Path dstFamilyPath, final TableName linkedTable, final String linkedRegion,
final String hfileName) throws IOException {
return create(conf, fs, dstFamilyPath, linkedTable, linkedRegion, hfileName, true);
@@ -392,7 +392,7 @@ public static boolean create(final Configuration conf, final FileSystem fs,
* @return true if the file is created, otherwise the file exists.
* @throws IOException on file or parent directory creation failure
*/
public static boolean create(final Configuration conf, final FileSystem fs,
public static String create(final Configuration conf, final FileSystem fs,
final Path dstFamilyPath, final TableName linkedTable, final String linkedRegion,
final String hfileName, final boolean createBackRef) throws IOException {
String familyName = dstFamilyPath.getName();
@@ -420,7 +420,9 @@ public static boolean create(final Configuration conf, final FileSystem fs,
}
try {
// Create the link
return fs.createNewFile(new Path(dstFamilyPath, name));
if(fs.createNewFile(new Path(dstFamilyPath, name))){
return name;
}
} catch (IOException e) {
LOG.error("couldn't create the link=" + name + " for " + dstFamilyPath, e);
// Revert the reference if the link creation failed
@@ -429,25 +431,7 @@ public static boolean create(final Configuration conf, final FileSystem fs,
}
throw e;
}
}

/**
* Create a new HFileLink starting from a hfileLink name
*
* <p>It also adds a back-reference to the hfile back-reference directory
* to simplify the reference-count and the cleaning process.
*
* @param conf {@link Configuration} to read for the archive directory name
* @param fs {@link FileSystem} on which to write the HFileLink
* @param dstFamilyPath - Destination path (table/region/cf/)
* @param hfileLinkName - HFileLink name (it contains hfile-region-table)
* @return true if the file is created, otherwise the file exists.
* @throws IOException on file or parent directory creation failure
*/
public static boolean createFromHFileLink(final Configuration conf, final FileSystem fs,
final Path dstFamilyPath, final String hfileLinkName)
throws IOException {
return createFromHFileLink(conf, fs, dstFamilyPath, hfileLinkName, true);
throw new IOException("File link=" + name + " already exists under " + dstFamilyPath + " folder.");
Contributor: We changed the behavior here? In the past it could return false, but now it will throw an exception?

Author: All these create variations are only used by RestoreSnapshotHelper.restoreStoreFile, which made no use of the returned boolean value. Now we need to return the link name, and in the event of a link creation failure, isn't it better to halt the whole snapshot clone/restore operation?

Contributor: OK, so let's add more javadoc saying it is only used when restoring a snapshot, so later people will not misuse it.

}

/**
@@ -464,7 +448,7 @@ public static boolean createFromHFileLink(final Configuration conf, final FileSy
* @return true if the file is created, otherwise the file exists.
* @throws IOException on file or parent directory creation failure
*/
public static boolean createFromHFileLink(final Configuration conf, final FileSystem fs,
public static String createFromHFileLink(final Configuration conf, final FileSystem fs,
final Path dstFamilyPath, final String hfileLinkName, final boolean createBackRef)
throws IOException {
Matcher m = LINK_NAME_PATTERN.matcher(hfileLinkName);
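The HFileLink diff above changes `create` from returning a boolean to returning the created link's name, and an already-existing link now raises an IOException instead of returning false. The following is a minimal sketch of that new contract using `java.nio` on a temp directory — the method and variable names are hypothetical, not the HBase API:

```java
import java.io.IOException;
import java.nio.file.*;

public class LinkCreateSketch {
    // Sketch of the new contract: return the created link's name,
    // or throw IOException if a file with that name already exists
    // (previously the method just returned false in that case).
    static String createLink(Path familyDir, String name) throws IOException {
        Path link = familyDir.resolve(name);
        try {
            Files.createFile(link); // fails atomically if the file exists, like fs.createNewFile
            return name;
        } catch (FileAlreadyExistsException e) {
            throw new IOException(
                "File link=" + name + " already exists under " + familyDir + " folder.");
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("cf");
        System.out.println(createLink(dir, "link-1")); // prints "link-1"
        try {
            createLink(dir, "link-1"); // same name again
            System.out.println("no exception");
        } catch (IOException e) {
            System.out.println("IOException"); // second attempt now fails loudly
        }
    }
}
```

Returning the name lets the caller (RestoreSnapshotHelper, per the review thread) hand the created link to the StoreFileTracker, which a boolean could not support.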
@@ -53,6 +53,8 @@
import org.apache.hadoop.hbase.regionserver.HRegionFileSystem;
import org.apache.hadoop.hbase.regionserver.HStoreFile;
import org.apache.hadoop.hbase.regionserver.StoreFileInfo;
import org.apache.hadoop.hbase.regionserver.storefiletracker.StoreFileTracker;
import org.apache.hadoop.hbase.regionserver.storefiletracker.StoreFileTrackerFactory;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.CommonFSUtils;
import org.apache.hadoop.hbase.wal.WALSplitUtil;
@@ -584,40 +586,45 @@ private void createMergedRegion(final MasterProcedureEnv env) throws IOException
final MasterFileSystem mfs = env.getMasterServices().getMasterFileSystem();
final Path tableDir = CommonFSUtils.getTableDir(mfs.getRootDir(), regionsToMerge[0].getTable());
final FileSystem fs = mfs.getFileSystem();

List<Path> mergedFiles = new ArrayList<>();
HRegionFileSystem mergeRegionFs = HRegionFileSystem.createRegionOnFileSystem(
env.getMasterConfiguration(), fs, tableDir, mergedRegion);

for (RegionInfo ri: this.regionsToMerge) {
HRegionFileSystem regionFs = HRegionFileSystem.openRegionFromFileSystem(
env.getMasterConfiguration(), fs, tableDir, ri, false);
mergeStoreFiles(env, regionFs, mergeRegionFs, mergedRegion);
mergedFiles.addAll(mergeStoreFiles(env, regionFs, mergeRegionFs, mergedRegion));
}
assert mergeRegionFs != null;
mergeRegionFs.commitMergedRegion();
mergeRegionFs.commitMergedRegion(mergedFiles, env);

// Prepare to create merged regions
env.getAssignmentManager().getRegionStates().
getOrCreateRegionStateNode(mergedRegion).setState(State.MERGING_NEW);
}

private void mergeStoreFiles(MasterProcedureEnv env, HRegionFileSystem regionFs,
private List<Path> mergeStoreFiles(MasterProcedureEnv env, HRegionFileSystem regionFs,
HRegionFileSystem mergeRegionFs, RegionInfo mergedRegion) throws IOException {
final TableDescriptor htd = env.getMasterServices().getTableDescriptors()
.get(mergedRegion.getTable());
List<Path> mergedFiles = new ArrayList<>();
for (ColumnFamilyDescriptor hcd : htd.getColumnFamilies()) {
String family = hcd.getNameAsString();
final Collection<StoreFileInfo> storeFiles = regionFs.getStoreFiles(family);
StoreFileTracker tracker =
StoreFileTrackerFactory.create(env.getMasterConfiguration(), htd, hcd, regionFs);
final Collection<StoreFileInfo> storeFiles = tracker.load();
if (storeFiles != null && storeFiles.size() > 0) {
for (StoreFileInfo storeFileInfo : storeFiles) {
// Create reference file(s) to parent region file here in mergedDir.
// As this procedure is running on master, use CacheConfig.DISABLED means
// don't cache any block.
mergeRegionFs.mergeStoreFile(regionFs.getRegionInfo(), family,
Path refFile = mergeRegionFs.mergeStoreFile(regionFs.getRegionInfo(), family,
new HStoreFile(storeFileInfo, hcd.getBloomFilterType(), CacheConfig.DISABLED));
mergedFiles.add(refFile);
}
}
}
return mergedFiles;
}

/**
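In the merge procedure diff above, `mergeStoreFiles` now loads each family's files from a `StoreFileTracker` and returns the list of created reference files, so `commitMergedRegion` can record them in the tracker instead of relying on a directory listing. A sketch of that collect-and-return flow with stub types (illustrative only, not HBase classes):

```java
import java.util.*;

public class MergeFilesSketch {
    // Stub standing in for StoreFileTracker.load(): the store files
    // of one source region's column family.
    interface Tracker { List<String> load(); }

    // For every source region, create a reference file per store file and
    // collect it, mirroring how mergeStoreFiles() accumulates mergedFiles
    // for commitMergedRegion().
    static List<String> mergeStoreFiles(Map<String, Tracker> trackersByRegion) {
        List<String> mergedFiles = new ArrayList<>();
        for (Map.Entry<String, Tracker> e : trackersByRegion.entrySet()) {
            for (String storeFile : e.getValue().load()) {
                // stand-in for mergeRegionFs.mergeStoreFile(...) returning a Path
                mergedFiles.add("ref-" + e.getKey() + "-" + storeFile);
            }
        }
        return mergedFiles;
    }

    public static void main(String[] args) {
        Map<String, Tracker> trackers = new LinkedHashMap<>();
        trackers.put("regionA", () -> Arrays.asList("f1", "f2"));
        trackers.put("regionB", () -> Arrays.asList("f3"));
        List<String> merged = mergeStoreFiles(trackers);
        System.out.println(merged.size()); // prints "3"
    }
}
```

The design point is that the merged region's file list is now an explicit value threaded through the procedure, which is what a FILE-based tracker needs to persist.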
@@ -64,6 +64,8 @@
import org.apache.hadoop.hbase.regionserver.RegionSplitPolicy;
import org.apache.hadoop.hbase.regionserver.RegionSplitRestriction;
import org.apache.hadoop.hbase.regionserver.StoreFileInfo;
import org.apache.hadoop.hbase.regionserver.storefiletracker.StoreFileTracker;
import org.apache.hadoop.hbase.regionserver.storefiletracker.StoreFileTrackerFactory;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.CommonFSUtils;
import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
@@ -618,21 +620,20 @@ public void createDaughterRegions(final MasterProcedureEnv env) throws IOExcepti
final FileSystem fs = mfs.getFileSystem();
HRegionFileSystem regionFs = HRegionFileSystem.openRegionFromFileSystem(
env.getMasterConfiguration(), fs, tabledir, getParentRegion(), false);

regionFs.createSplitsDir(daughterOneRI, daughterTwoRI);

Pair<Integer, Integer> expectedReferences = splitStoreFiles(env, regionFs);
Pair<List<Path>, List<Path>> expectedReferences = splitStoreFiles(env, regionFs);

assertReferenceFileCount(fs, expectedReferences.getFirst(),
assertReferenceFileCount(fs, expectedReferences.getFirst().size(),
regionFs.getSplitsDir(daughterOneRI));
regionFs.commitDaughterRegion(daughterOneRI);
assertReferenceFileCount(fs, expectedReferences.getFirst(),
regionFs.commitDaughterRegion(daughterOneRI, expectedReferences.getFirst(), env);
assertReferenceFileCount(fs, expectedReferences.getFirst().size(),
new Path(tabledir, daughterOneRI.getEncodedName()));

assertReferenceFileCount(fs, expectedReferences.getSecond(),
assertReferenceFileCount(fs, expectedReferences.getSecond().size(),
regionFs.getSplitsDir(daughterTwoRI));
regionFs.commitDaughterRegion(daughterTwoRI);
assertReferenceFileCount(fs, expectedReferences.getSecond(),
regionFs.commitDaughterRegion(daughterTwoRI, expectedReferences.getSecond(), env);
assertReferenceFileCount(fs, expectedReferences.getSecond().size(),
new Path(tabledir, daughterTwoRI.getEncodedName()));
}

@@ -649,7 +650,7 @@ private void deleteDaughterRegions(final MasterProcedureEnv env) throws IOExcept
* Create Split directory
* @param env MasterProcedureEnv
*/
private Pair<Integer, Integer> splitStoreFiles(final MasterProcedureEnv env,
private Pair<List<Path>, List<Path>> splitStoreFiles(final MasterProcedureEnv env,
final HRegionFileSystem regionFs) throws IOException {
final Configuration conf = env.getMasterConfiguration();
TableDescriptor htd = env.getMasterServices().getTableDescriptors().get(getTableName());
@@ -665,7 +666,9 @@ private Pair<Integer, Integer> splitStoreFiles(final MasterProcedureEnv env,
new HashMap<String, Collection<StoreFileInfo>>(htd.getColumnFamilyCount());
for (ColumnFamilyDescriptor cfd : htd.getColumnFamilies()) {
String family = cfd.getNameAsString();
Collection<StoreFileInfo> sfis = regionFs.getStoreFiles(family);
StoreFileTracker tracker =
StoreFileTrackerFactory.create(env.getMasterConfiguration(), htd, cfd, regionFs);
Collection<StoreFileInfo> sfis = tracker.load();
if (sfis == null) {
continue;
}
@@ -691,7 +694,7 @@ private Pair<Integer, Integer> splitStoreFiles(final MasterProcedureEnv env,
}
if (nbFiles == 0) {
// no file needs to be splitted.
return new Pair<Integer, Integer>(0, 0);
return new Pair<>(Collections.emptyList(), Collections.emptyList());
}
// Max #threads is the smaller of the number of storefiles or the default max determined above.
int maxThreads = Math.min(
@@ -744,14 +747,18 @@ private Pair<Integer, Integer> splitStoreFiles(final MasterProcedureEnv env,
throw (InterruptedIOException) new InterruptedIOException().initCause(e);
}

int daughterA = 0;
int daughterB = 0;
List<Path> daughterA = new ArrayList<>();
List<Path> daughterB = new ArrayList<>();
// Look for any exception
for (Future<Pair<Path, Path>> future : futures) {
try {
Pair<Path, Path> p = future.get();
daughterA += p.getFirst() != null ? 1 : 0;
daughterB += p.getSecond() != null ? 1 : 0;
if(p.getFirst() != null){
daughterA.add(p.getFirst());
}
if(p.getSecond() != null){
daughterB.add(p.getSecond());
}
} catch (InterruptedException e) {
throw (InterruptedIOException) new InterruptedIOException().initCause(e);
} catch (ExecutionException e) {
Expand All @@ -764,7 +771,7 @@ private Pair<Integer, Integer> splitStoreFiles(final MasterProcedureEnv env,
getParentRegion().getShortNameToLog() + " Daughter A: " + daughterA +
" storefiles, Daughter B: " + daughterB + " storefiles.");
}
return new Pair<Integer, Integer>(daughterA, daughterB);
return new Pair<>(daughterA, daughterB);
}

private void assertReferenceFileCount(final FileSystem fs, final int expectedReferenceFileCount,
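The split procedure diff above changes `splitStoreFiles` from counting daughter reference files (`Pair<Integer, Integer>`) to collecting their actual paths (`Pair<List<Path>, List<Path>>`) from the split tasks' futures. A self-contained sketch of that futures-to-lists pattern, using `String` in place of `Path` and `AbstractMap.SimpleEntry` in place of HBase's `Pair` (both substitutions are mine):

```java
import java.util.*;
import java.util.concurrent.*;

public class SplitResultsSketch {
    public static void main(String[] args) throws Exception {
        // Each task returns (daughterA ref file, daughterB ref file);
        // either side may be null if no reference was needed.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        List<Future<AbstractMap.SimpleEntry<String, String>>> futures = new ArrayList<>();
        futures.add(pool.submit(() -> new AbstractMap.SimpleEntry<>("a1", "b1")));
        futures.add(pool.submit(() -> new AbstractMap.SimpleEntry<>("a2", (String) null)));

        // Collect the actual paths instead of incrementing counters,
        // so the daughter regions' trackers can be told exactly which
        // reference files were created.
        List<String> daughterA = new ArrayList<>();
        List<String> daughterB = new ArrayList<>();
        for (Future<AbstractMap.SimpleEntry<String, String>> f : futures) {
            AbstractMap.SimpleEntry<String, String> p = f.get();
            if (p.getKey() != null) daughterA.add(p.getKey());
            if (p.getValue() != null) daughterB.add(p.getValue());
        }
        pool.shutdown();
        System.out.println(daughterA.size() + "," + daughterB.size()); // prints "2,1"
    }
}
```

The old reference-file counts are still recoverable as `list.size()`, which is exactly how the diff adapts the `assertReferenceFileCount` calls.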
@@ -453,56 +453,24 @@ private List<RegionInfo> createFsLayout(
List<RegionInfo> newRegions,
final CreateHdfsRegions hdfsRegionHandler) throws IOException {
final MasterFileSystem mfs = env.getMasterServices().getMasterFileSystem();
final Path tempdir = mfs.getTempDir();
Author: Snapshot restore is another feature that was relying on temp dirs and renames. I took the decision to allow restored dirs to be created on the final path already; my understanding is that tables being restored/cloned will not be enabled, and if the snapshot fails at this stage it will not get to the meta update stages, meaning there will be no inconsistencies. There would be the need to identify and clean out leftovers of failed snapshot recoveries. Any thoughts/suggestions?

Contributor: +1, I think this is similar to the merge/split scenario. There is no reason that we can not do this. And in #3716, @frostruan is trying to implement the snapshot operation by proc-v2, so I think we could implement the clean-up logic in the rollback of the procedure.


// 1. Create Table Descriptor
// using a copy of descriptor, table will be created enabling first
final Path tempTableDir = CommonFSUtils.getTableDir(tempdir, tableDescriptor.getTableName());
if (CommonFSUtils.isExists(mfs.getFileSystem(), tempTableDir)) {
final Path tableDir = CommonFSUtils.getTableDir(mfs.getRootDir(), tableDescriptor.getTableName());
if (CommonFSUtils.isExists(mfs.getFileSystem(), tableDir)) {
// if the region dirs exist, will cause exception and unlimited retry (see HBASE-24546)
LOG.warn("temp table dir already exists on disk: {}, will be deleted.", tempTableDir);
CommonFSUtils.deleteDirectory(mfs.getFileSystem(), tempTableDir);
LOG.warn("temp table dir already exists on disk: {}, will be deleted.", tableDir);
CommonFSUtils.deleteDirectory(mfs.getFileSystem(), tableDir);
}
((FSTableDescriptors) (env.getMasterServices().getTableDescriptors()))
.createTableDescriptorForTableDirectory(tempTableDir,
TableDescriptorBuilder.newBuilder(tableDescriptor).build(), false);
((FSTableDescriptors)(env.getMasterServices().getTableDescriptors()))
.createTableDescriptorForTableDirectory(tableDir,
TableDescriptorBuilder.newBuilder(tableDescriptor).build(), false);

// 2. Create Regions
newRegions = hdfsRegionHandler.createHdfsRegions(
env, tempdir, tableDescriptor.getTableName(), newRegions);

// 3. Move Table temp directory to the hbase root location
CreateTableProcedure.moveTempDirectoryToHBaseRoot(env, tableDescriptor, tempTableDir);
// Move Table temp mob directory to the hbase root location
Path tempMobTableDir = MobUtils.getMobTableDir(tempdir, tableDescriptor.getTableName());
if (mfs.getFileSystem().exists(tempMobTableDir)) {
moveTempMobDirectoryToHBaseRoot(mfs, tableDescriptor, tempMobTableDir);
}
return newRegions;
}
env, mfs.getRootDir(), tableDescriptor.getTableName(), newRegions);

/**
* Move table temp mob directory to the hbase root location
* @param mfs The master file system
* @param tableDescriptor The table to operate on
* @param tempMobTableDir The temp mob directory of table
* @throws IOException If failed to move temp mob dir to hbase root dir
*/
private void moveTempMobDirectoryToHBaseRoot(final MasterFileSystem mfs,
final TableDescriptor tableDescriptor, final Path tempMobTableDir) throws IOException {
FileSystem fs = mfs.getFileSystem();
final Path tableMobDir =
MobUtils.getMobTableDir(mfs.getRootDir(), tableDescriptor.getTableName());
if (!fs.delete(tableMobDir, true) && fs.exists(tableMobDir)) {
throw new IOException("Couldn't delete mob table " + tableMobDir);
}
if (!fs.exists(tableMobDir.getParent())) {
fs.mkdirs(tableMobDir.getParent());
}
if (!fs.rename(tempMobTableDir, tableMobDir)) {
throw new IOException("Unable to move mob table from temp=" + tempMobTableDir
+ " to hbase root=" + tableMobDir);
}
return newRegions;
}

/**
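The `createFsLayout` diff above drops the temp-dir-plus-rename staging: any leftover table dir is deleted (avoiding the unlimited-retry problem of HBASE-24546) and the restored regions are created directly at their final location. A sketch of that delete-then-create-in-place pattern with `java.nio` and made-up paths (not the HBase directory layout API):

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.Comparator;
import java.util.stream.Stream;

public class CreateInPlaceSketch {
    // Delete any leftover table dir from a previously failed attempt,
    // then create the layout directly at its final location instead of
    // building it under a temp dir and renaming it into place.
    static void createTableDir(Path tableDir) throws IOException {
        if (Files.exists(tableDir)) {
            try (Stream<Path> walk = Files.walk(tableDir)) {
                // delete children before parents
                walk.sorted(Comparator.reverseOrder()).forEach(p -> p.toFile().delete());
            }
        }
        Files.createDirectories(tableDir.resolve("region-1").resolve("cf"));
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("hbase-root");
        Path tableDir = root.resolve("data").resolve("default").resolve("t1");
        createTableDir(tableDir);
        createTableDir(tableDir); // rerun is safe: leftovers are wiped first
        System.out.println(Files.isDirectory(tableDir.resolve("region-1").resolve("cf"))); // prints "true"
    }
}
```

Per the review thread, this is acceptable because the table is not yet enabled and meta has not been updated, so a failure mid-restore leaves only orphan files to clean up, not an inconsistent table.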