Bug #7191

./mdbci up DIR/node0 ups all nodes

Added by Ilfat Kinyaev almost 5 years ago. Updated almost 5 years ago.

Status:
Closed
Priority:
High
Category:
mdbci testing
Sprint/Milestone:
Start date:
05.07.2016
Due date:
% Done:

0%

Estimated time:
2:00 h
Target branch:
Test scenario:

# 1. Generate the configuration (echo $? => 0)
./mdbci --template confs/aws.json generate SOME_DIR

# 2. Search for "#NONE, due invalid repo name" to see which nodes are incorrect (e.g. SOME_DIR/node0 and SOME_DIR/node1):
grep -r "#NONE, due invalid repo name" ~/mdbci_path

# 3. Up node0 or node1 (echo $? => 0).
./mdbci up SOME_DIR/node0
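
The expected result of step 3 is that only node0 ends up running. A minimal Ruby check that could be run afterwards; the script and its arguments are hypothetical, and it assumes the usual "name  state (provider)" layout of vagrant status output:

#!/usr/bin/env ruby
# check_single_node.rb -- hypothetical helper: verify that only the requested
# node is running after `./mdbci up SOME_DIR/node0`.
require 'open3'

dir    = ARGV[0] || 'SOME_DIR'
target = ARGV[1] || 'node0'

stdout, _stderr, status = Open3.capture3('vagrant', 'status', chdir: dir)
abort 'vagrant status failed' unless status.success?

# Lines look like "node0                     running (aws)"; split on 2+ spaces.
running = stdout.lines
                .map { |line| line.split(/\s{2,}/) }
                .select { |parts| parts[1].to_s.start_with?('running') }
                .map { |parts| parts[0].strip }

extra = running - [target]
abort "Bug reproduced, extra running nodes: #{extra.join(', ')}" unless extra.empty?
puts "OK: only #{target} is running"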


Description

When run for a single node, it brings up all nodes in the directory.

aws_up_node0.txt (18.2 KB), Ilfat Kinyaev, 18.07.2016 15:45
docker_up.txt (65.1 KB), Ilfat Kinyaev, 18.07.2016 15:45
docker_up_maxscale.txt (18.4 KB), Ilfat Kinyaev, 18.07.2016 15:45

Related issues

Related to [mdbci] Maria DB Continuous integration tool - Bug #7283: Some nodes have invalid repo name (Closed, 21.07.2016)


History

#1 Updated by Mark Zaslavskiy almost 5 years ago

  • Priority changed from Normal to High

#2 Updated by Ilfat Kinyaev almost 5 years ago

  • Status changed from New to Active / In progress

#3 Updated by Mark Zaslavskiy almost 5 years ago

  • Status changed from Active / In progress to New

#4 Updated by Ilfat Kinyaev almost 5 years ago

  • Status changed from New to Active / In progress

Check with the following configs:
- docker
- libvirt
Generate and up with logging.

#5 Updated by Ilfat Kinyaev almost 5 years ago

  • Test scenario updated (diff)

#6 Updated by Mark Zaslavskiy almost 5 years ago

  • Related to Task #7193: Jenkins job clone_configuration added

#7 Updated by Mark Zaslavskiy almost 5 years ago

  • Related to deleted (Task #7193: Jenkins job clone_configuration)

#8 Updated by Ilfat Kinyaev almost 5 years ago

Not reproduced for docker and libvirt.
For vbox - a bug was created.
For aws - the problem is present. The log is in aws_up_node0.txt. Check vagrant status:

vagranttest@maxscale-jenkins:~/mdbci_kinyaev/mdbci/SOME_DIR4$ vagrant status
Current machine states:

node0 running (aws)
node1 running (aws)
node2 running (aws)
node3 not created (aws)
galera0 not created (aws)
galera1 not created (aws)
galera2 not created (aws)
galera3 not created (aws)
maxscale not created (aws)

#9 Updated by Ilfat Kinyaev almost 5 years ago

  • Status changed from New to Active / In progress

#10 Updated by Ilfat Kinyaev almost 5 years ago

AWS fails because an error is raised while the node is being brought up, and the handling function, on seeing the error, tries to bring up the other dead machines as well:
unless dead_machines.empty?
  (1..@attempts).each do |i|
    $out.info 'Trying to force restart broken machines'
    $out.info "Attempt: #{i}"

Error at:
INFO: > node1: Inappropriate ioctl for device
INFO: > node1: [2016-07-21T07:18:20+00:00] INFO: Forking chef instance to converge...
INFO: > node1: Starting Chef Client, version 12.9.38
INFO: > node1: [2016-07-21T07:18:20+00:00] INFO: * Chef 12.9.38 *
INFO: > node1: [2016-07-21T07:18:20+00:00] INFO: Platform: x86_64-linux
INFO: > node1: [2016-07-21T07:18:20+00:00] INFO: Chef-client pid: 1785
INFO: > node1: [2016-07-21T07:18:28+00:00] INFO: Setting the run_list to ["role[node1]"] from CLI options
INFO: > node1: ================================================================================
INFO: > node1: Error expanding the run_list:
INFO: > node1: ================================================================================
INFO: > node1: Unexpected Error:
INFO: > node1: -----------------
INFO: > node1: Chef::Exceptions::JSON::ParseError: lexical error: invalid char in json text.
INFO: > node1: #NONE, due invalid repo name
INFO: > node1: (right here) ------^
INFO: > node1: Platform:
INFO: > node1: ---------
INFO: > node1: x86_64-linux
INFO: > node1: Running handlers:
INFO: > node1: [2016-07-21T07:18:28+00:00] ERROR: Running exception handlers
INFO: > node1: Running handlers complete
INFO: > node1: [2016-07-21T07:18:28+00:00] ERROR: Exception handlers complete
INFO: > node1: Chef Client failed. 0 resources updated in 08 seconds
INFO: > node1: [2016-07-21T07:18:28+00:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
INFO: > node1: [2016-07-21T07:18:28+00:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report
INFO: > node1: [2016-07-21T07:18:28+00:00] ERROR: lexical error: invalid char in json text.
INFO: > node1: #NONE, due invalid repo name
INFO: > node1: (right here) ------^
INFO: ==> node1: [2016-07-21T07:18:29+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
ERROR: /home/vagranttest/.vagrant.d/gems/gems/vagrant-aws-0.7.0/lib/vagrant-aws/action/run_instance.rb:98: warning: duplicated key at line 100 ignored: :associate_public_ip
ERROR: Chef never successfully completed! Any errors should be visible in the
ERROR: output above. Please fix your recipes so that they properly complete.
ERROR: Bringing up failed
ERROR: exit code 1
WARN: Checking for dead machines and checking Chef runs on machines
INFO: node0 not created (aws)
INFO: node1 running (aws)
Connection to ec2-54-75-91-124.eu-west-1.compute.amazonaws.com closed.
INFO: node2 not created (aws)
INFO: node3 not created (aws)
INFO: galera0 not created (aws)
INFO: galera1 not created (aws)
INFO: galera2 not created (aws)
INFO: galera3 not created (aws)
INFO: maxscale not created (aws)
ERROR: Some machines are dead:
ERROR: node0
ERROR: node2
ERROR: node3
ERROR: galera0
ERROR: galera1
ERROR: galera2
ERROR: galera3
ERROR: maxscale
ERROR: Some machines have broken Chef run:
ERROR: node1
INFO: Trying to force restart broken machines
INFO: Attempt: 1
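
A possible direction, sketched below, would be to restrict the restart loop to the machine the user asked for. This assumes the name of the requested node is available at that point; requested_node is a hypothetical variable, while dead_machines, @attempts and $out come from the code quoted above:

# Sketch only: restrict the force-restart loop to the node passed to `up`.
machines_to_restart =
  if requested_node
    dead_machines.select { |machine| machine == requested_node }
  else
    dead_machines
  end

unless machines_to_restart.empty?
  (1..@attempts).each do |i|
    $out.info 'Trying to force restart broken machines'
    $out.info "Attempt: #{i}"
    # ... restart only machines_to_restart here ...
  end
end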

#11 Updated by Ilfat Kinyaev almost 5 years ago

The error appears for node0 and node1 with the aws config.

#12 Updated by Ilfat Kinyaev almost 5 years ago

Compare SOME_DIR/node0 or SOME_DIR/node1 with SOME_DIR/node2 - the first ones contain:
#NONE, due invalid repo name
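
The same check can be done programmatically; a minimal Ruby sketch, with the directory name and marker string taken from this report and nothing else assumed about the file layout:

# Sketch: list which generated files in SOME_DIR contain the bad repo marker.
require 'find'

marker = '#NONE, due invalid repo name'
Find.find('SOME_DIR') do |path|
  next unless File.file?(path)
  puts path if File.read(path).include?(marker)
end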

#13 Updated by Ilfat Kinyaev almost 5 years ago

  • Related to Bug #7283: Some nodes have invalid repo name added

#14 Updated by Ilfat Kinyaev almost 5 years ago

  • Status changed from Active / In progress to Review
  • Assignee changed from Ilfat Kinyaev to Mark Zaslavskiy
  • Test scenario updated (diff)

The error is in the config; see the generation step.

#15 Updated by Mark Zaslavskiy almost 5 years ago

  • Status changed from Review to Closed
