This repository has been archived by the owner on May 12, 2021. It is now read-only.

METRON-1004: Travis CI - Job Exceeded Maximum Time Limit #624

Closed
wants to merge 43 commits into from
Changes from 15 commits (43 commits total)
bd484b5
first pass. Still need to fix storm kafka interaction + possibly cleanup
justinleet Jun 23, 2017
a470063
fixes plus logging
justinleet Jun 23, 2017
05e0a64
more stuffs
justinleet Jun 23, 2017
05b9c1f
more fixes and update
justinleet Jun 24, 2017
912575e
removing error code
justinleet Jun 24, 2017
11473f6
maybe making it work for superclasses like the parser tests. Unsure …
justinleet Jun 24, 2017
fa5c052
Removing approximately 1M log errors by actually cleaning up ZK
justinleet Jun 25, 2017
a5b152e
empty
justinleet Jun 25, 2017
c9f072b
Removing our artifacts before caching
justinleet Jun 26, 2017
d4ab212
Trying to flush cache
justinleet Jun 26, 2017
f2979c2
Undo flush
justinleet Jun 26, 2017
97a679e
Adding logging to try to figure out what's going on
justinleet Jun 23, 2017
aa837e0
Adding time to integration test command
justinleet Jun 26, 2017
b1ede14
trimming tests down.
cestella Jun 27, 2017
6562246
Removing jacoco from Travis build
justinleet Jun 27, 2017
ad27176
Properly handle clearing out Metron artifacts from Maven so we don't …
justinleet Jun 27, 2017
6824ca9
Move unit test to use mock htable rather than real hbase.
cestella Jun 28, 2017
8eeec06
Parallelizing the conditions to the STIX extractor test should speed …
cestella Jun 28, 2017
feab3ce
Config functions test should reuse the zookeeper instance.
cestella Jun 28, 2017
fafa57d
FSFunctionsTest should reuse infrastructure rather than spinning up h…
cestella Jun 28, 2017
1c98ae4
Refactored HBaseClientTest to not delete tables, but rather just issu…
cestella Jun 28, 2017
9033f49
Setting global cache for npm and removing int tests for initial run.
cestella Jun 28, 2017
a6f2e9f
Updating travis.
cestella Jun 28, 2017
46aacd7
Updating travis to cache all forms of npm cache.
cestella Jun 28, 2017
a8368fe
Updating.
cestella Jun 28, 2017
e208bca
updating travis again.
cestella Jun 28, 2017
fdddd6d
Updating travis.
cestella Jun 28, 2017
de042bf
travis update
cestella Jun 28, 2017
cb930fc
adding longer timeout for great success.
cestella Jun 28, 2017
af9f186
removing quiet mode
cestella Jun 28, 2017
0e3cf34
Resetting cache.
cestella Jun 28, 2017
9e0911e
Removed extraneous shell scripts.
cestella Jun 28, 2017
e8e13a8
Migrating the parser integration tests to quasi-unit-tests.
cestella Jun 29, 2017
195960b
allowing grok parsers to work too.
cestella Jun 29, 2017
87c21c0
Making the kafka integration test more resilient.
cestella Jun 29, 2017
63cad19
Adding integration test document for parsers.
cestella Jun 29, 2017
5c9780e
Trying out a VM instead of container
justinleet Jun 29, 2017
6c8fe98
responding to review comment
justinleet Jun 29, 2017
730c1c4
Adding back in PcapTopologyIntegrationTest.testTimestampInPacket, but…
justinleet Jun 29, 2017
bb6007b
kafka weirdness fixed (maybe) (#13)
cestella Jun 29, 2017
ca1a9e6
Removing extraneous field from earlier testing
justinleet Jun 29, 2017
1621a82
Kafka embedded server only started/stopped in KafkaControllerIntegrat…
merrimanr Jun 29, 2017
232703f
Couple review comments
justinleet Jun 30, 2017
4 changes: 3 additions & 1 deletion .travis.yml
@@ -17,7 +17,9 @@ before_install:
- export PATH=$M2_HOME/bin:$PATH
script:
- |
Contributor @ottobackwards commented Jun 27, 2017:
because it makes it slow right?
Can we document with the commits, as you go, the rationale behind the changes, so we can look back and understand a little bit?

"why did we get rid of FOO?"
Let me check the commit log

" Remove foo. It is seen to cause an increase of X in Y and do z. it is also pretty snarky and fresh"

"Oh, that makes sense"

Contributor Author replied:
Sorry, my bad. Usually I tend to consider the commits less important because it's usually a full feature, and it's just minor changes / fixes afterwards.

I'll try to make sure the messages are easier to follow, since this is pretty ongoing until it's consistent.

-        time mvn -q -T 2C -DskipTests install && time mvn -q -T 2C jacoco:prepare-agent surefire:test@unit-tests && mvn -q jacoco:prepare-agent surefire:test@integration-tests && time mvn -q jacoco:prepare-agent test --projects metron-interface/metron-config && time build_utils/verify_licenses.sh
+        time mvn -q -T 2C -DskipTests install && time mvn -q -T 2C surefire:test@unit-tests && time mvn -q surefire:test@integration-tests && time mvn -q test --projects metron-interface/metron-config && time build_utils/verify_licenses.sh
+before_cache:
+  - rm -rf $HOME/.m2/org/apache/metron
cache:
directories:
- $HOME/.m2
MaasIntegrationTest.java
@@ -33,17 +33,14 @@
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.curator.test.TestingServer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.net.NetUtils;
import org.apache.hadoop.util.JarFinder;
import org.apache.hadoop.util.Shell;
import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.api.records.YarnApplicationState;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.server.MiniYARNCluster;
import org.apache.metron.integration.ComponentRunner;
import org.apache.metron.integration.components.YarnComponent;
import org.apache.metron.integration.components.ZKServerComponent;
@@ -57,25 +54,25 @@
import org.apache.metron.test.utils.UnitTestHelper;
import org.apache.zookeeper.KeeperException;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.Assert;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

public class MaasIntegrationTest {
private static final Log LOG =
LogFactory.getLog(MaasIntegrationTest.class);
-  private CuratorFramework client;
-  private ComponentRunner runner;
-  private YarnComponent yarnComponent;
-  private ZKServerComponent zkServerComponent;
-  @Before
-  public void setup() throws Exception {
+  private static CuratorFramework client;
+  private static ComponentRunner runner;
+  private static YarnComponent yarnComponent;
+  private static ZKServerComponent zkServerComponent;
+
+  @BeforeClass
+  public static void setupBeforeClass() throws Exception {
UnitTestHelper.setJavaLoggingLevel(Level.SEVERE);
LOG.info("Starting up YARN cluster");

Map<String, String> properties = new HashMap<>();
zkServerComponent = new ZKServerComponent();

yarnComponent = new YarnComponent().withApplicationMasterClass(ApplicationMaster.class).withTestName(MaasIntegrationTest.class.getSimpleName());

runner = new ComponentRunner.Builder()
@@ -92,14 +89,19 @@ public void setup() throws Exception {
client.start();
}

-  @After
-  public void tearDown(){
+  @AfterClass
+  public static void tearDownAfterClass(){
     if(client != null){
       client.close();
     }
     runner.stop();
   }
+
+  @After
+  public void tearDown() {
+    runner.reset();
+  }

@Test(timeout=900000)
public void testMaaSWithDomain() throws Exception {
testDSShell(true);
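The MaasIntegrationTest change above is the core of the speed-up: the YARN and ZooKeeper components move from per-test @Before/@After to per-class @BeforeClass/@AfterClass, with a cheap runner.reset() between tests. A minimal, dependency-free sketch of the difference (FakeCluster and the counters are illustrative, not from the PR):

```java
// Sketch: why moving expensive setup from per-test to per-class helps.
// FakeCluster stands in for the YARN/ZooKeeper components; the counter
// shows how many times the cluster starts under each lifecycle.
public class LifecycleDemo {
  static class FakeCluster {
    static int starts = 0;
    void start() { starts++; }   // expensive in the real test
    void stop()  { }
    void reset() { }             // cheap per-test cleanup
  }

  public static void main(String[] args) {
    // Old style: @Before/@After -- one start per test method.
    FakeCluster.starts = 0;
    for (int test = 0; test < 3; test++) {
      FakeCluster c = new FakeCluster();
      c.start();                 // @Before
      /* ... run test ... */
      c.stop();                  // @After
    }
    int perTestStarts = FakeCluster.starts;

    // New style: @BeforeClass/@AfterClass -- one start for the class,
    // with a cheap reset() between tests (the PR's runner.reset()).
    FakeCluster.starts = 0;
    FakeCluster shared = new FakeCluster();
    shared.start();              // @BeforeClass
    for (int test = 0; test < 3; test++) {
      /* ... run test ... */
      shared.reset();            // @After
    }
    shared.stop();               // @AfterClass
    int perClassStarts = FakeCluster.starts;

    System.out.println(perTestStarts + " vs " + perClassStarts);
    if (perTestStarts != 3 || perClassStarts != 1) throw new AssertionError();
  }
}
```

The trade-off is that state can now leak between tests, which is why the PR also adds per-test reset hooks.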
ConfigUploadComponent.java
@@ -20,6 +20,7 @@
package org.apache.metron.profiler.integration;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.imps.CuratorFrameworkState;
import org.apache.metron.integration.InMemoryComponent;
import org.apache.metron.integration.UnableToStartException;
import org.apache.metron.integration.components.ZKServerComponent;
@@ -56,14 +57,25 @@ public void stop() {
// nothing to do
}

+  public void update()
+          throws UnableToStartException {
+    try {
+      upload();
+    } catch (Exception e) {
+      throw new UnableToStartException(e.getMessage(), e);
+    }
+  }

/**
* Uploads configuration to Zookeeper.
* @throws Exception
*/
private void upload() throws Exception {
final String zookeeperUrl = topologyProperties.getProperty(ZKServerComponent.ZOOKEEPER_PROPERTY);
     try(CuratorFramework client = getClient(zookeeperUrl)) {
-      client.start();
+      if(client.getState() != CuratorFrameworkState.STARTED) {
+        client.start();
+      }
uploadGlobalConfig(client);
uploadProfilerConfig(client);
}
@@ -87,7 +99,7 @@ private void uploadProfilerConfig(CuratorFramework client) throws Exception {
* @param client The zookeeper client.
*/
   private void uploadGlobalConfig(CuratorFramework client) throws Exception {
-    if (globalConfiguration == null) {
+    if (globalConfiguration != null) {
byte[] globalConfig = readGlobalConfigFromFile(globalConfiguration);
if (globalConfig.length > 0) {
writeGlobalConfigToZookeeper(readGlobalConfigFromFile(globalConfiguration), client);
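The CuratorFrameworkState guard above makes upload() safe to call more than once on the same client: start() only runs when the client has not already been started. A dependency-free sketch of the pattern (the Client class is a stand-in for CuratorFramework, not the real API):

```java
// Sketch: idempotent start guard, mirroring the
// client.getState() != CuratorFrameworkState.STARTED check in the diff above.
public class IdempotentStartDemo {
  enum State { LATENT, STARTED }

  static class Client {
    State state = State.LATENT;
    int startCalls = 0;
    void start() {
      startCalls++;
      if (state == State.STARTED) throw new IllegalStateException("already started");
      state = State.STARTED;
    }
  }

  // Mirrors the guarded start in ConfigUploadComponent.upload().
  static void upload(Client client) {
    if (client.state != State.STARTED) {
      client.start();
    }
    /* ... write configs to ZooKeeper ... */
  }

  public static void main(String[] args) {
    Client client = new Client();
    upload(client);   // first call starts the client
    upload(client);   // second call reuses it; no IllegalStateException
    System.out.println("startCalls=" + client.startCalls);
    if (client.startCalls != 1) throw new AssertionError();
  }
}
```

This is what lets the test classes keep a single shared component alive and re-upload per-test configuration without re-creating the client.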
ProfilerIntegrationTest.java
@@ -33,6 +33,7 @@
import org.apache.metron.hbase.TableProvider;
import org.apache.metron.integration.BaseIntegrationTest;
import org.apache.metron.integration.ComponentRunner;
import org.apache.metron.integration.UnableToStartException;
import org.apache.metron.integration.components.FluxTopologyComponent;
import org.apache.metron.integration.components.KafkaComponent;
import org.apache.metron.integration.components.ZKServerComponent;
@@ -41,7 +42,10 @@
import org.apache.metron.statistics.OnlineStatisticsProvider;
import org.apache.metron.test.mock.MockHTable;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.Assert;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

import java.io.File;
@@ -76,7 +80,7 @@ public class ProfilerIntegrationTest extends BaseIntegrationTest {
* }
*/
   @Multiline
-  private String message1;
+  private static String message1;

/**
* {
@@ -87,7 +91,7 @@ public class ProfilerIntegrationTest extends BaseIntegrationTest {
* }
*/
   @Multiline
-  private String message2;
+  private static String message2;

/**
* {
@@ -98,15 +102,16 @@ public class ProfilerIntegrationTest extends BaseIntegrationTest {
* }
*/
   @Multiline
-  private String message3;
+  private static String message3;

-  private ColumnBuilder columnBuilder;
-  private ZKServerComponent zkComponent;
-  private FluxTopologyComponent fluxComponent;
-  private KafkaComponent kafkaComponent;
-  private List<byte[]> input;
-  private ComponentRunner runner;
-  private MockHTable profilerTable;
+  private static ColumnBuilder columnBuilder;
+  private static ZKServerComponent zkComponent;
+  private static FluxTopologyComponent fluxComponent;
+  private static KafkaComponent kafkaComponent;
+  private static ConfigUploadComponent configUploadComponent;
+  private static List<byte[]> input;
+  private static ComponentRunner runner;
+  private static MockHTable profilerTable;

private static final String tableName = "profiler";
private static final String columnFamily = "P";
@@ -133,7 +138,7 @@ public HTableInterface getTable(Configuration config, String tableName) throws I
@Test
public void testExample1() throws Exception {

-    setup(TEST_RESOURCES + "/config/zookeeper/readme-example-1");
+    update(TEST_RESOURCES + "/config/zookeeper/readme-example-1");

// start the topology and write test messages to kafka
fluxComponent.submitTopology();
@@ -158,7 +163,7 @@ public void testExample1() throws Exception {
@Test
public void testExample2() throws Exception {

-    setup(TEST_RESOURCES + "/config/zookeeper/readme-example-2");
+    update(TEST_RESOURCES + "/config/zookeeper/readme-example-2");

// start the topology and write test messages to kafka
fluxComponent.submitTopology();
@@ -191,7 +196,7 @@ public void testExample2() throws Exception {
@Test
public void testExample3() throws Exception {

-    setup(TEST_RESOURCES + "/config/zookeeper/readme-example-3");
+    update(TEST_RESOURCES + "/config/zookeeper/readme-example-3");

// start the topology and write test messages to kafka
fluxComponent.submitTopology();
@@ -216,7 +221,7 @@ public void testExample3() throws Exception {
@Test
public void testExample4() throws Exception {

-    setup(TEST_RESOURCES + "/config/zookeeper/readme-example-4");
+    update(TEST_RESOURCES + "/config/zookeeper/readme-example-4");

// start the topology and write test messages to kafka
fluxComponent.submitTopology();
@@ -239,7 +244,8 @@ public void testExample4() throws Exception {
@Test
public void testPercentiles() throws Exception {

-    setup(TEST_RESOURCES + "/config/zookeeper/percentiles");
+    update(TEST_RESOURCES + "/config/zookeeper/percentiles");


// start the topology and write test messages to kafka
fluxComponent.submitTopology();
@@ -277,9 +283,15 @@ private <T> List<T> read(List<Put> puts, String family, byte[] qualifier, Class<
return results;
}

-  public void setup(String pathToConfig) throws Exception {
+  @BeforeClass
+  public static void setupBeforeClass() throws UnableToStartException {
     columnBuilder = new ValueOnlyColumnBuilder(columnFamily);
+
+    List<String> inputNew = Stream.of(message1, message2, message3)
+        .map(m -> Collections.nCopies(5, m))
+        .flatMap(l -> l.stream())
+        .collect(Collectors.toList());

// create input messages for the profiler to consume
input = Stream.of(message1, message2, message3)
.map(Bytes::toBytes)
@@ -320,10 +332,8 @@ public void setup(String pathToConfig) throws Exception {
new KafkaComponent.Topic(outputTopic, 1)));

     // upload profiler configuration to zookeeper
-    ConfigUploadComponent configUploadComponent = new ConfigUploadComponent()
-        .withTopologyProperties(topologyProperties)
-        .withGlobalConfiguration(pathToConfig)
-        .withProfilerConfiguration(pathToConfig);
+    configUploadComponent = new ConfigUploadComponent()
+        .withTopologyProperties(topologyProperties);

// load flux definition for the profiler topology
fluxComponent = new FluxTopologyComponent.Builder()
@@ -345,11 +355,32 @@ public void setup(String pathToConfig) throws Exception {
runner.start();
}

+  public void update(String path) throws Exception {
+    configUploadComponent.withGlobalConfiguration(path)
+        .withProfilerConfiguration(path);
+    configUploadComponent.update();
+  }
+
+  @AfterClass
+  public static void tearDownAfterClass() throws Exception {
+    MockHTable.Provider.clear();
+    if (runner != null) {
+      runner.stop();
+    }
+  }
+
+  @Before
+  public void setup() {
+    // create the mock table
+    profilerTable = (MockHTable) MockHTable.Provider.addToCache(tableName, columnFamily);
+  }

   @After
   public void tearDown() throws Exception {
-    MockHTable.Provider.clear();
+    profilerTable.clear();
     if (runner != null) {
-      runner.stop();
+      runner.reset();
     }
   }
}
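With the refactor above, ProfilerIntegrationTest builds its expensive components once in setupBeforeClass() and each test only re-points the shared ConfigUploadComponent at its own configuration path via update(path). A dependency-free sketch of that split (the Uploader class and paths are illustrative, not the real Metron API):

```java
// Sketch: build an upload component once, reconfigure it per test.
public class SetupUpdateDemo {
  static class Uploader {
    String globalPath, profilerPath;
    java.util.List<String> uploaded = new java.util.ArrayList<>();
    Uploader withGlobalConfiguration(String p)   { globalPath = p; return this; }
    Uploader withProfilerConfiguration(String p) { profilerPath = p; return this; }
    void update() { uploaded.add(globalPath + "|" + profilerPath); }
  }

  static Uploader shared;               // built once, as in setupBeforeClass()

  // Per-test reconfiguration, as in the PR's update(String path).
  static void update(String path) {
    shared.withGlobalConfiguration(path)
          .withProfilerConfiguration(path);
    shared.update();
  }

  public static void main(String[] args) {
    shared = new Uploader();            // expensive construction happens once
    update("config/readme-example-1");  // each test uploads only its config
    update("config/percentiles");
    System.out.println(shared.uploaded.size());
    if (shared.uploaded.size() != 2) throw new AssertionError();
    if (!shared.uploaded.get(1).startsWith("config/percentiles")) throw new AssertionError();
  }
}
```

The design choice is the same as in MaasIntegrationTest: pay the infrastructure cost once per class, and keep only the cheap, test-specific work (config upload, mock-table reset) in the per-test hooks.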