
Commit

Fix conflict
ashutoshc committed Jan 20, 2011
2 parents (b3b0813 + 32758fd), commit 32eacdf
Showing 280 changed files with 79,613 additions and 21,770 deletions.
99 changes: 99 additions & 0 deletions CHANGES.txt
@@ -6,6 +6,8 @@ Trunk - Unreleased

NEW FEATURES

HIVE-1790. Support HAVING clause in Hive (Vaibhav Aggarwal via cws)

HIVE-1304. Add function row_sequence in contrib (John Sichi via namit)

HIVE-1405. Add ability to run an initialization script by 'hive -i <fileName>' (John Sichi via namit)
@@ -127,8 +129,34 @@ Trunk - Unreleased
HIVE-842 Authentication Infrastructure for Hive
(Ashutosh Chauhan via He Yongqiang)

HIVE-1853 Downgrade JDO (Paul Yang via namit)

HIVE-1835 Better auto-complete for Hive
(Paul Butler via Ning Zhang)

HIVE-1856 Implement DROP TABLE/VIEW IF EXISTS
(Marcel Kornacker via jvs)

HIVE-1858 Implement DROP {PARTITION, INDEX, TEMPORARY FUNCTION} IF EXISTS
(Marcel Kornacker via jvs)

HIVE-78 Authorization model for Hive
(Yongqiang He via namit)

HIVE-1696 Add delegation token support to metastore
(Devaraj Das via namit)

HIVE-1862 Revive partition filtering in the Hive MetaStore
(Mac Yang via pauly)

IMPROVEMENTS

HIVE-1692. FetchOperator.getInputFormatFromCache hides causal exception (Philip Zeyliger via cws)

HIVE-1899 Add a factory method for creating a synchronized wrapper for IMetaStoreClient (John Sichi via cws)

HIVE-1852 Reduce unnecessary DFSClient.rename() calls (Ning Zhang via jssarma)

HIVE-1712. Migrating metadata from derby to mysql throws NullPointerException (Jake Farrell via pauly)

HIVE-1394. Do not update transient_lastDdlTime if the partition is modified by a housekeeping
@@ -306,10 +334,29 @@ Trunk - Unreleased
HIVE-1415: add CLI command for executing a SQL script
(Edward Capriolo via jvs)

HIVE-1855 Include Process ID in the log4j log file name
(Ning Zhang via namit)

HIVE-1878 Set the version of Hive trunk to '0.7.0-SNAPSHOT' to avoid
confusing it with a release
(Carl Steinbach via jvs)

HIVE-1907 Store jobid in ExecDriver
(namit via He Yongqiang)

HIVE-1865 redo zookeeper hive lock manager
(namit via He Yongqiang)

OPTIMIZATIONS

BUG FIXES

HIVE-1915. Authorization on database level is broken.
(He Yongqiang via cws)

HIVE-1203. HiveInputFormat.getInputFormatFromCache "swallows" cause exception when throwing IOException
(Vladimir Klimontovich via cws)

HIVE-1524. Parallel Execution fails if mapred.job.name is set
(Ning Zhang via jssarma)

@@ -617,6 +664,55 @@ Trunk - Unreleased
HIVE-1845 Some attributes in eclipse template file are deprecated
(Liyin Tang via namit)

HIVE-1854 Temporarily disable metastore tests for listPartitionsByFilter()
(Paul Yang via namit)

HIVE-1857 mixed-case table name on left-hand side of LATERAL VIEW results in
query failing with confusing error message (John Sichi via pauly)

HIVE-1456 No need to check for LOG as null in sort-merge join
(Alexey Diomin via namit)

HIVE-1806 Merge per dynamic partition based on size of each dynamic partition
(Ning Zhang via namit)

HIVE-1864 Fix test load_overwrite.q
(Carl Steinbach via namit)

HIVE-1870 Add TestRemoveHiveMetaStore, deleted accidentally
(Carl Steinbach via namit)

HIVE-1873 Fix 'tar' build target broken in HIVE-1526
(Carl Steinbach via namit)

HIVE-1874 fix HBase filter pushdown broken by HIVE-1638
(John Sichi via namit)

HIVE-1871 Bug in merging dynamic partitions introduced by HIVE-1806
(He Yongqiang via namit)

HIVE-1881 Add an option to use FsShell to delete dir in warehouse
(He Yongqiang via namit)

HIVE-1840 Support ALTER DATABASE to change database properties
(Ning Zhang via namit)

HIVE-1889 add an option (hive.index.compact.file.ignore.hdfs)
to ignore HDFS location stored in index files
(Yongqiang He via namit)

HIVE-1903 Can't join HBase tables if one's name is the beginning of
the other (John Sichi via namit)

HIVE-1912 Double escaping special chars when removing old partitions
in rmr (Ning Zhang via namit)

HIVE-1913 use partition level serde properties
(Yongqiang He via namit)

HIVE-1917 CTAS (create-table-as-select) throws exception when showing
results (Ning Zhang via namit)

TESTS

HIVE-1464. improve test query performance
@@ -636,6 +732,9 @@ Trunk - Unreleased
HIVE-1658. Fix describe [extended] column formatting
(Thiruvel Thirumoolan via Ning Zhang)

HIVE-1829. Fix intermittent failures in TestRemoteMetaStore
(Carl Steinbach via jvs)

TASKS

HIVE-1526. Hive should depend on a release version of Thrift
7 changes: 7 additions & 0 deletions build-common.xml
@@ -453,6 +453,13 @@
</junit>
<fail if="tests.failed">Tests failed!</fail>
</target>
<target name="test-shims">
<subant target="test">
<property name="hadoop.version" value="${hadoop.security.version}"/>
<property name="hadoop.security.version" value="${hadoop.security.version}"/>
<fileset dir="${hive.root}/shims" includes="build.xml"/>
</subant>
</target>

<target name="clean-test">
<delete dir="${test.build.dir}"/>
4 changes: 2 additions & 2 deletions build.properties
@@ -1,7 +1,7 @@
Name=Hive
name=hive
version=0.7.0
year=2010
version=0.7.0-SNAPSHOT
year=2011

javac.debug=on
javac.version=1.6
16 changes: 14 additions & 2 deletions build.xml
@@ -126,6 +126,17 @@
</sequential>
</macrodef>

<macrodef name="iterate-test-dirs">
<attribute name="target"/>
<sequential>
<subant target="@{target}">
<property name="build.dir.hive" location="${build.dir.hive}"/>
<property name="is-offline" value="${is-offline}"/>
<filelist dir="." files="common/build.xml,serde/build.xml,metastore/build.xml,ql/build.xml,cli/build.xml,contrib/build.xml,service/build.xml,jdbc/build.xml,hwi/build.xml${hbase.iterate}"/>
</subant>
</sequential>
</macrodef>

<macrodef name="iterate">
<attribute name="target"/>
<sequential>
@@ -207,7 +218,8 @@
<target name="test"
depends="clean-test,jar"
description="Run tests">
<iterate target="test"/>
<antcall target="test-shims"/>
<iterate-test-dirs target="test"/>
</target>

<!-- create an html report from junit output files -->
@@ -497,7 +509,7 @@
<packageset dir="ql/src/java"/>
<packageset dir="ql/src/test"/>
<packageset dir="ql/src/gen/thrift/gen-javabean"/>
<packageset dir="${build.dir.hive}/ql/gen-java"/>
<packageset dir="${build.dir.hive}/ql/gen/antlr/gen-java"/>
<packageset dir="shims/src/common/java"/>

<packageset dir="howl/src/java"/>
76 changes: 71 additions & 5 deletions cli/src/java/org/apache/hadoop/hive/cli/CliDriver.java
@@ -32,7 +32,10 @@
import java.util.Map;
import java.util.Set;

import jline.Completor;
import jline.ArgumentCompletor;
import jline.ArgumentCompletor.ArgumentDelimiter;
import jline.ArgumentCompletor.AbstractArgumentDelimiter;
import jline.ConsoleReader;
import jline.History;
import jline.SimpleCompletor;
@@ -43,6 +46,8 @@
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.Driver;
import org.apache.hadoop.hive.ql.parse.ParseDriver;
import org.apache.hadoop.hive.ql.exec.FunctionRegistry;
import org.apache.hadoop.hive.ql.exec.Utilities;
import org.apache.hadoop.hive.ql.exec.Utilities.StreamPrinter;
import org.apache.hadoop.hive.ql.processors.CommandProcessor;
@@ -295,6 +300,71 @@ public void processInitFiles(CliSessionState ss) throws IOException {
ss.setIsSilent(saveSilent);
}

public static Completor getCommandCompletor () {
// SimpleCompletor matches against a pre-defined wordlist
// We start with an empty wordlist and build it up
SimpleCompletor sc = new SimpleCompletor(new String[0]);

// We add Hive function names
// For functions that aren't infix operators, we add an open
// parenthesis at the end.
for (String s : FunctionRegistry.getFunctionNames()) {
if (s.matches("[a-z_]+")) {
sc.addCandidateString(s + "(");
} else {
sc.addCandidateString(s);
}
}

// We add Hive keywords, including lower-cased versions
for (String s : ParseDriver.getKeywords()) {
sc.addCandidateString(s);
sc.addCandidateString(s.toLowerCase());
}

// Because we use parentheses and brackets in addition to whitespace
// as token delimiters, we need to define a new ArgumentDelimiter
// that recognizes those characters as delimiters.
ArgumentDelimiter delim = new AbstractArgumentDelimiter () {
public boolean isDelimiterChar (String buffer, int pos) {
char c = buffer.charAt(pos);
return (Character.isWhitespace(c) || c == '(' || c == ')' ||
c == '[' || c == ']');
}
};

// The ArgumentCompletor allows us to match multiple tokens
// in the same line.
final ArgumentCompletor ac = new ArgumentCompletor(sc, delim);
// By default ArgumentCompletor is in "strict" mode meaning
// a token is only auto-completed if all prior tokens
// match. We don't want that since there are valid tokens
// that are not in our wordlist (e.g. table and column names)
ac.setStrict(false);

// ArgumentCompletor always adds a space after a matched token.
// This is undesirable for function names because a space after
// the opening parenthesis is unnecessary (and uncommon) in Hive.
// We stack a custom Completor on top of our ArgumentCompletor
// to reverse this.
Completor completor = new Completor () {
public int complete (String buffer, int offset, List completions) {
List<String> comp = (List<String>) completions;
int ret = ac.complete(buffer, offset, completions);
// ConsoleReader will do the substitution if and only if there
// is exactly one valid completion, so we ignore other cases.
if (completions.size() == 1) {
if (comp.get(0).endsWith("( ")) {
comp.set(0, comp.get(0).trim());
}
}
return ret;
}
};

return completor;
}

public static void main(String[] args) throws Exception {

OptionsProcessor oproc = new OptionsProcessor();
@@ -361,11 +431,7 @@ public static void main(String[] args) throws Exception {
ConsoleReader reader = new ConsoleReader();
reader.setBellEnabled(false);
// reader.setDebug(new PrintWriter(new FileWriter("writer.debug", true)));

List<SimpleCompletor> completors = new LinkedList<SimpleCompletor>();
completors.add(new SimpleCompletor(new String[] {"set", "from", "create", "load", "describe",
"quit", "exit"}));
reader.addCompletor(new ArgumentCompletor(completors));
reader.addCompletor(getCommandCompletor());

String line;
final String HISTORYFILE = ".hivehistory";
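For context, the completor above can be exercised outside the CLI. The following is a minimal sketch, not part of this commit: the CompletorDemo class and the sample buffer are illustrative, and it assumes jline 0.9.x plus the Hive CLI classes and their dependencies on the classpath.

import java.util.ArrayList;
import java.util.List;

import jline.Completor;

import org.apache.hadoop.hive.cli.CliDriver;

public class CompletorDemo {
  public static void main(String[] args) {
    // Build the completor exactly as the CLI now does at startup.
    Completor completor = CliDriver.getCommandCompletor();

    // jline fills this list with candidate completions.
    List completions = new ArrayList();

    // Complete the partial token "sel" with the cursor at its end.
    String buffer = "sel";
    int offset = completor.complete(buffer, buffer.length(), completions);

    // Keywords are offered in upper and lower case (SELECT, select);
    // function names are offered with a trailing "(". When exactly one
    // candidate ends in "( ", the wrapping Completor trims the space.
    System.out.println("offset: " + offset);
    System.out.println("candidates: " + completions);
  }
}

Because setStrict(false) is applied, a token can still be completed even when earlier tokens in the buffer (such as table or column names) are not in the wordlist.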
28 changes: 23 additions & 5 deletions common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
@@ -54,7 +54,7 @@ public class HiveConf extends Configuration {
HiveConf.ConfVars.METASTOREDIRECTORY,
HiveConf.ConfVars.METASTOREWAREHOUSE,
HiveConf.ConfVars.METASTOREURIS,
HiveConf.ConfVars.METATORETHRIFTRETRIES,
HiveConf.ConfVars.METASTORETHRIFTRETRIES,
HiveConf.ConfVars.METASTOREPWD,
HiveConf.ConfVars.METASTORECONNECTURLHOOK,
HiveConf.ConfVars.METASTORECONNECTURLKEY,
@@ -106,6 +106,9 @@ public static enum ConfVars {
// run in local mode only if number of tasks (for map and reduce each) is
// less than this
LOCALMODEMAXTASKS("hive.exec.mode.local.auto.tasks.max", 4),
// if true, DROP TABLE/VIEW does not fail if table/view doesn't exist and IF EXISTS is
// not specified
DROPIGNORESNONEXISTENT("hive.exec.drop.ignorenonexistent", true),

// hadoop stuff
HADOOPBIN("hadoop.bin.path", System.getenv("HADOOP_HOME") + "/bin/hadoop"),
@@ -125,7 +128,11 @@ public static enum ConfVars {
METASTOREWAREHOUSE("hive.metastore.warehouse.dir", ""),
METASTOREURIS("hive.metastore.uris", ""),
// Number of times to retry a connection to a Thrift metastore server
METATORETHRIFTRETRIES("hive.metastore.connect.retries", 1),
METASTORETHRIFTRETRIES("hive.metastore.connect.retries", 5),
// Number of seconds the client should wait between connection attempts
METASTORE_CLIENT_CONNECT_RETRY_DELAY("hive.metastore.client.connect.retry.delay", 1),
// Socket timeout for the client connection (in seconds)
METASTORE_CLIENT_SOCKET_TIMEOUT("hive.metastore.client.socket.timeout", 20),
METASTOREPWD("javax.jdo.option.ConnectionPassword", ""),
// Class name of JDO connection url hook
METASTORECONNECTURLHOOK("hive.metastore.ds.connection.url.hook", ""),
@@ -161,6 +168,8 @@ public static enum ConfVars {
// CLI
CLIIGNOREERRORS("hive.cli.errors.ignore", false),

HIVE_METASTORE_FS_HANDLER_CLS("hive.metastore.fs.handler.class", "org.apache.hadoop.hive.metastore.HiveMetaStoreFsImpl"),

// Things we log in the jobconf

// session identifier
@@ -253,8 +262,8 @@ public static enum ConfVars {
HIVECONVERTJOIN("hive.auto.convert.join", false),
HIVESKEWJOINKEY("hive.skewjoin.key", 1000000),
HIVESKEWJOINMAPJOINNUMMAPTASK("hive.skewjoin.mapjoin.map.tasks", 10000),
HIVESKEWJOINMAPJOINMINSPLIT("hive.skewjoin.mapjoin.min.split", 33554432), //32M
MAPREDMINSPLITSIZE("mapred.min.split.size", 1),
HIVESKEWJOINMAPJOINMINSPLIT("hive.skewjoin.mapjoin.min.split", 33554432L), //32M
MAPREDMINSPLITSIZE("mapred.min.split.size", 1L),
HIVEMERGEMAPONLY("hive.mergejob.maponly", true),

HIVESENDHEARTBEAT("hive.heartbeat.interval", 1000),
@@ -308,6 +317,7 @@ public static enum ConfVars {
HIVE_ZOOKEEPER_CLIENT_PORT("hive.zookeeper.client.port", ""),
HIVE_ZOOKEEPER_SESSION_TIMEOUT("hive.zookeeper.session.timeout", 600*1000),
HIVE_ZOOKEEPER_NAMESPACE("hive.zookeeper.namespace", "hive_zookeeper_namespace"),
HIVE_ZOOKEEPER_CLEAN_EXTRA_NODES("hive.zookeeper.clean.extra.nodes", false),

// For HBase storage handler
HIVE_HBASE_WAL_ENABLED("hive.hbase.wal.enabled", true),
@@ -325,11 +335,19 @@ public static enum ConfVars {

SEMANTIC_ANALYZER_HOOK("hive.semantic.analyzer.hook",null),

HIVE_AUTHORIZATION_ENABLED("hive.security.authorization.enabled", false),
HIVE_AUTHORIZATION_MANAGER("hive.security.authorization.manager", null),
HIVE_AUTHENTICATOR_MANAGER("hive.security.authenticator.manager", null),

HIVE_AUTHORIZATION_TABLE_USER_GRANTS("hive.security.authorization.createtable.user.grants", null),
HIVE_AUTHORIZATION_TABLE_GROUP_GRANTS("hive.security.authorization.createtable.group.grants", null),
HIVE_AUTHORIZATION_TABLE_ROLE_GRANTS("hive.security.authorization.createtable.role.grants", null),
// Print column names in output
HIVE_CLI_PRINT_HEADER("hive.cli.print.header", false),

HIVE_ERROR_ON_EMPTY_PARTITION("hive.error.on.empty.partition", false);
HIVE_ERROR_ON_EMPTY_PARTITION("hive.error.on.empty.partition", false),

HIVE_INDEX_IGNORE_HDFS_LOC("hive.index.compact.file.ignore.hdfs", false),
;
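
For reference, a short sketch of reading the renamed and newly added settings. This is illustrative only, not part of this commit: the ConfDemo class is hypothetical, and it assumes HiveConf's existing getIntVar/getBoolVar accessors and a default configuration on the classpath.

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.conf.HiveConf.ConfVars;

public class ConfDemo {
  public static void main(String[] args) {
    HiveConf conf = new HiveConf(ConfDemo.class);

    // Renamed from METATORETHRIFTRETRIES; the default rises from 1 to 5.
    int retries = conf.getIntVar(ConfVars.METASTORETHRIFTRETRIES);

    // New client-side settings, both in seconds.
    int retryDelay = conf.getIntVar(ConfVars.METASTORE_CLIENT_CONNECT_RETRY_DELAY);
    int socketTimeout = conf.getIntVar(ConfVars.METASTORE_CLIENT_SOCKET_TIMEOUT);

    // New toggle backing DROP ... IF EXISTS (HIVE-1856/HIVE-1858): when true,
    // dropping a nonexistent table/view without IF EXISTS does not fail.
    boolean dropIgnoresNonexistent = conf.getBoolVar(ConfVars.DROPIGNORESNONEXISTENT);

    System.out.println(retries + ", " + retryDelay + "s, " + socketTimeout
        + "s, " + dropIgnoresNonexistent);
  }
}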


Expand Down
