mirror of https://github.com/xcat2/xcat-core.git synced 2026-04-07 09:21:29 +00:00

Merge pull request #5706 from xcat2/master

Merge master to 2.14 for 2.14.4 release(1)
This commit is contained in:
zet809
2018-10-16 09:24:19 +08:00
committed by GitHub
120 changed files with 577 additions and 966 deletions


@@ -5,7 +5,6 @@ Go Conserver
programming language. For more information, see https://github.com/xcat2/goconserver/
.. toctree::
:maxdepth: 2
quickstart.rst


@@ -10,5 +10,4 @@ Download the toolkit and prepare the software repository on the xCAT Management
rhels.rst
ubuntu.rst
nvml.rst


@@ -1,15 +0,0 @@
Install NVIDIA Management Library (optional)
============================================
See https://developer.nvidia.com/nvidia-management-library-nvml for more information.
The .run file can be downloaded from NVIDIA's website and placed into the ``/install/postscripts`` directory on the Management Node.
To enable installation of the management library after the node is installed, add the .run file to the ``postbootscripts`` attribute for the nodes: ::
# ensure the .run file has execute permission
chmod +x /install/postscripts/<gpu_deployment_kit>.run
# add as the postbootscript
chdef -t node -o <noderange> -p postbootscripts="<gpu_deployment_kit>.run \
--silent --installdir=<your_desired_install_dir>"


@@ -291,8 +291,7 @@ The following section explains how to accomplish this.
packimage rhels7.3-ppc64le-netboot-service
Note: ``enablepart=yes`` in partition file will make partition on the locak disk at every boot time. If you want to keep the contents
on local disk at next boot, remmber changing to ``enablepart=no`` after the first time provisioning.
.. note:: ``enablepart=yes`` in partition file will partition the local disk at every boot. If you want to preserve the contents on local disk at next boot, change to ``enablepart=no`` after the initial provision.
For more information on ``localdisk`` option, refer to :ref:`setup_localdisk_label`
@@ -312,9 +311,9 @@ To update the xCAT software in the image at a later time:
packimage "<osimagename>"
rinstall service osimage="<osimagename>"
Note: The Service Nodes are set up as NFS-root servers for the compute nodes.
Any time changes are made to any compute image on the mgmt node it will be
necessary to sync all changes to all Service Nodes. In our case the
``/install`` directory is mounted on the servicenodes, so the update to the
compute node image is automatically available.
.. note:: The Service Nodes are set up as NFS-root servers for the compute nodes.
Any time changes are made to any compute image on the mgmt node it will be
necessary to sync all changes to all Service Nodes. In our case the
``/install`` directory is mounted on the servicenodes, so the update to the
compute node image is automatically available.


@@ -23,12 +23,12 @@ The switches table will look like this: ::
If there is only one admin and one password for all the switches then put the entry in the xCAT passwd table for the admin id and password to use to login. ::
tabch key=mswitch passwd.username=admin passwd.password=admin
tabch key=switch passwd.username=admin passwd.password=admin
The passwd table will look like this: ::
#key,username,password,cryptmethod,comments,disable
"mswitch","admin","admin",,,
"switch","admin","admin",,,
Setup ssh connection to the Mellanox Switch
-------------------------------------------


@@ -1,5 +1,5 @@
Enable the HTTPS protocol for REST API
=====================================
======================================
To improve security between the REST API clients and the server, the secure transfer protocol (https) is enabled by default.


@@ -59,7 +59,7 @@ author = u'IBM Corporation'
# The short X.Y version.
version = '2'
# The full version, including alpha/beta/rc tags.
release = '2.14.3'
release = '2.14.4'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.


@@ -120,6 +120,7 @@ xCAT provides several functions that can simplify regular expressions.
+-----------+-----------+-----------+-----------+
| row2-col1 | row2-col2 | row2-col3 | row2-col4 |
+-----------+-----------+-----------+-----------+
To obtain the rack index, use ``|row(\d+)-col(\d+)|(dim2idx($1, 4, $2))|``. This maps the racks to:
+---+---+---+---+
@@ -127,6 +128,7 @@ xCAT provides several functions that can simplify regular expressions.
+---+---+---+---+
| 5 | 6 | 7 | 8 |
+---+---+---+---+
Note that the size of the highest dimension (2 rows) is not needed, and all values are one-indexed.
If each rack contains 20 nodes, use ``|row(\d+)-col(\d+)-node(\d+)|(dim2idx($1, 4, $2, 20, $3))|`` to determine a node index (useful for determining IP addresses).
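The arithmetic behind ``dim2idx`` can be sanity-checked outside xCAT before committing a regular expression to the table. The sketch below is plain shell, not xCAT code; the formula is inferred from the tables and the one-indexing rule above:

```shell
# Plain-shell sketch of dim2idx's arithmetic (inferred from the tables
# above; not xCAT code). All values are one-indexed and the size of the
# highest dimension is not needed.
row=2; col=3; node=5

# dim2idx($1, 4, $2): rack index for row2-col3 -> 7, matching the table
rack=$(( (row - 1) * 4 + col ))
echo "rack index: $rack"

# dim2idx($1, 4, $2, 20, $3): node index with 20 nodes per rack
idx=$(( ((row - 1) * 4 + (col - 1)) * 20 + node ))
echo "node index: $idx"
```

Running this for ``row2-col3`` prints a rack index of 7, agreeing with the second row of the table above.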


@@ -1,25 +1,23 @@
.. _setup_localdisk_label:
Enabling the localdisk Option
Enabling the localdisk option
-----------------------------
``Note``: You can skip this section if not using the ``localdisk`` option in your litefile table.
Several things need to be done to enable the 'localdisk' support:
.. note:: You can skip this section if not using the ``localdisk`` option in your ``litefile`` table.
Define how to partition the local disk
``````````````````````````````````````
When a node is deployed, the local hard disk needs to be partitioned and formatted before it can be used. This section explains how provide a configuration file that tells xCAT to partition a local disk and make it ready to use for the directories listed in the litefile table with the ``localdisk`` option.
When a node is deployed, the local hard disk needs to be partitioned and formatted before it can be used. This section explains how to provide a configuration file that tells xCAT to partition a local disk and make it ready to use for the directories listed in the ``litefile`` table.
The configuration file needs to be specified in the ``partitionfile`` attribute of the osimage definition. The configuration file includes several parts:
The configuration file needs to be specified in the ``partitionfile`` attribute of the osimage definition. The configuration file includes several sections:
* Global parameters to control enabling or disabling the function
* [disk] part to control the partitioning of the disk
* [localspace] part to control which partition will be used to store the localdisk directories listed in the litefile table
* [swapspace] part to control the enablement of the swap space for the node.
* ``[disk]`` section to control the partitioning of the disk
* ``[localspace]`` section to control which partition will be used to store the ``localdisk`` directories listed in the ``litefile`` table
* ``[swapspace]`` section to control the enablement of the swap space for the node.
An example localdisk configuration file: ::
An example ``localdisk`` configuration file: ::
enable=yes
enablepart=no
@@ -48,15 +46,15 @@ An example localdisk configuration file: ::
The two global parameters ``enable`` and ``enablepart`` can be used to control the enabling/disabling of the functions:
* enable: The localdisk feature only works when ``enable`` is set to *yes*. If it is set to *no*, the localdisk configuration will not be run.
* enable: The ``localdisk`` feature only works when ``enable`` is set to ``yes``. If it is set to ``no``, the ``localdisk`` configuration will not be run.
* enablepart: The partition action (refer to the ``[disk]`` section) will be run only when ``enablepart=yes``.
The ``[disk]`` section is used to configure how to partition a hard disk:
* dev: The path of the device file.
* clear: If set to ``yes`` it will clear all the existing partitions on this disk.
* fstype: The file system type for the new created partitions. ``ext3`` is the default value if not set.
* parts: A comma separated list of space ranges, one for each partition that will be created on the device. The valid format for each space range is ``<startpoint>-<endpoint>`` or ``<percentage of the disk>``. For example, you could set it to ``100M-10G`` or ``50``. If you set it to ``50``, that means 50% of the disk space will be assigned to that partition.
* fstype: The file system type for the newly created partitions. ``ext3`` is the default.
* parts: A comma separated list of space ranges, one for each partition that will be created on the device. The valid format for each space range is ``<startpoint>-<endpoint>`` or ``<percentage of the disk>``. For example, you could set it to ``100M-10G`` or ``50``. If set to ``50``, 50% of the disk space will be assigned to that partition.
The ``[localspace]`` section is used to specify which partition will be used as local storage for the node.
@@ -67,21 +65,23 @@ the ``[swapspace]`` section is used to configure the swap space for the statelit
* dev: The path of the partition file which will be used as the swap space.
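Putting the pieces together, a hypothetical configuration file might look like the following (device names, sizes, and section values are illustrative only, combining the global parameters with the three sections described above):

```shell
# Illustrative only: create a localdisk configuration file with the
# global parameters plus [disk], [localspace], and [swapspace] sections.
mkdir -p /install/custom
cat > /install/custom/cfglocaldisk <<'EOF'
enable=yes
enablepart=no

[disk]
dev=/dev/sda
clear=yes
fstype=ext3
parts=100M-10G,50

[localspace]
dev=/dev/sda1

[swapspace]
dev=/dev/sda2
EOF
```

The file path matches the ``chdef -t osimage ... partitionfile=/install/custom/cfglocaldisk`` step shown below; the device and partition values must be adapted to the actual hardware.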
To enable the local disk capability, create the configuration file (for example in ``/install/custom``) and set the path in the partitionfile attribute for the osimage: ::
To enable the local disk capability, create the configuration file (for example in ``/install/custom``) and set the path in the ``partitionfile`` attribute for the osimage: ::
chdef -t osimage partitionfile=/install/custom/cfglocaldisk
chdef -t osimage <osimage> partitionfile=/install/custom/cfglocaldisk
Now all nodes that use this osimage (i.e. have their provmethod attribute set to this osimage definition name), will have its local disk configured.
Now all nodes that use this osimage (i.e. have their ``provmethod`` attribute set to this osimage definition name) will have their local disk configured.
Configure the files in the litefile table
`````````````````````````````````````````
For the files/directories that you would like xCAT to store on the local disk, add an entry in the litefile table like this: ::
For the files/directories to store on the local disk, add an entry in the ``litefile`` table: ::
"ALL","/tmp/","localdisk",,
``Note``: you do not need to specify the swap space in the litefile table. Just putting it in the partitionfile config file is enough.
.. note:: You do not need to specify the swap space in the ``litefile`` table. Putting it in the ``partitionfile`` config file is enough.
Add an entry in policy table to permit the running of the ``getpartition`` command from the node ::
chtab priority=7.1 policy.commands=getpartition policy.rule=allow
Run ``genimage`` and ``packimage`` for the osimage


@@ -15,11 +15,11 @@ The basic entry format looks like following: ::
path_of_src_file1 -> path_of_dst_directory
path_of_src_file1 path_of_src_file2 ... -> path_of_dst_directory
The path_of_src_file* should be the full path of the source file on the Management Node.
The ``path_of_src_file*`` should be the full path of the source file on the Management Node.
The path_of_dst_file* should be the full path of the destination file on target node. Please make sure path_of_dst_file* is not a existing directory on target node, otherwise, the file sync with ``updatenode -r /usr/bin/scp`` or ``xdcp -r /usr/bin/scp`` will fail.
The ``path_of_dst_file*`` should be the full path of the destination file on the target node. Make sure ``path_of_dst_file*`` is not an existing directory on the target node; otherwise, the file sync with ``updatenode -r /usr/bin/scp`` or ``xdcp -r /usr/bin/scp`` will fail.
The path_of_dst_directory should be the full path of the destination directory. Please make sure *eth_of_dst_directory is not a existing file on target node, otherwise, the file sync with ``updatenode -r /usr/bin/scp`` or ``xdcp -r /usr/bin/scp`` will fail.
The ``path_of_dst_directory`` should be the full path of the destination directory. Make sure ``path_of_dst_directory`` is not an existing file on the target node; otherwise, the file sync with ``updatenode -r /usr/bin/scp`` or ``xdcp -r /usr/bin/scp`` will fail.
Since the synclist file is for common purpose, the target node need not be configured in it.
@@ -37,7 +37,7 @@ sync file **/etc/file4** to the file **/etc/tmp/file5** on the node( different f
/etc/file4 -> /etc/tmp/file5
sync the multiple files **/etc/file1**, **/etc/file2**, **/etc/file3**, ... to the directory **/tmp/etc** (**/tmp/etc** must be a directory when multiple files are synced at one time). If the directory does not exist,**xdcp** will create it. ::
sync the multiple files **/etc/file1**, **/etc/file2**, **/etc/file3**, ... to the directory **/tmp/etc** (**/tmp/etc** must be a directory when multiple files are synced at one time). If the directory does not exist, **xdcp** will create it. ::
/etc/file1 /etc/file2 /etc/file3 -> /tmp/etc
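Conceptually, an entry with several sources behaves like copying each source into the destination directory, which is created if missing. A plain-shell approximation of the ``/etc/file1 /etc/file2 /etc/file3 -> /tmp/etc`` entry (illustrative only; **xdcp** actually transfers the files with rsync or scp):

```shell
# Illustrative approximation of a multi-source synclist entry:
# every source listed before "->" lands in the destination directory,
# which is created if it does not exist (as xdcp does).
mkdir -p /tmp/etc
touch /tmp/demo_file1 /tmp/demo_file2   # hypothetical stand-in source files
cp /tmp/demo_file1 /tmp/demo_file2 /tmp/etc/
ls /tmp/etc
```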
@@ -116,7 +116,8 @@ Advanced synclist file features
After you define the files to rsync in the syncfile, you can add an **EXECUTEALWAYS** clause in the syncfile. The **EXECUTEALWAYS** clause will list all the postscripts you would always like to run after the files are sync'd, whether or not any file is actually updated. The files in this list must be added to the list of files to rsync, if hierarchical.
For example, your rsyncfile may look like this. **Note: the path to the file to EXECUTE, is the location of the *.post file on the MN**. ::
For example, your rsyncfile may look like this.
.. note:: The path to the file to EXECUTE is the location of the ``*.post`` file on the MN. ::
/tmp/share/file2 -> /tmp/file2
@@ -160,7 +161,7 @@ When you use the **APPEND** clause, the file (left) of the arrow is appended to
The script creates a backup of the original file on the node in the directory defined by the site table nodesyncfiledir attribute, which is **/var/xcat/node/syncfiles** by default. To update the original file when using this function, you need to rsync a new original file to the node, removing the old original from the **/var/xcat/node/syncfiles/org** directory. If you want to clean up all the files for the append function on the node, you can use the ``xdsh -c`` flag. See the man page for ``xdsh``.
Note:no order of execution may be assumed by the order that the **EXECUTE,EXECUTEALWAYS and APPEND** clause fall in the synclist file.
.. note:: No order of execution may be assumed from the order in which the **EXECUTE**, **EXECUTEALWAYS**, and **APPEND** clauses appear in the synclist file.
You can add an **MERGE** clause to your syncfile. This is only supported on Linux.
@@ -184,9 +185,9 @@ For example, your synclist file may look like this ::
/etc/mydir/mergeshadow -> /etc/shadow
/etc/mydir/mergegroup -> /etc/group
When you use the **MERGE** clause, the file (left) of the arrow is merged into the file right of the arrow. It will replace any common userid's found in those files and add new userids. The /opt/xcat/share/xcat/scripts/xdcpmerge.sh is used to accomplish this.
When you use the **MERGE** clause, the file left of the arrow is merged into the file right of the arrow. It will replace any common userids found in those files and add new userids. The **/opt/xcat/share/xcat/scripts/xdcpmerge.sh** script is used to accomplish this.
Note: no order of execution may be assumed by the order that the **EXECUTE,EXECUTEALWAYS,APPEND and MERGE** clause fall in the synclist file.
.. note:: No order of execution may be assumed from the order in which the **EXECUTE**, **EXECUTEALWAYS**, **APPEND**, and **MERGE** clauses appear in the synclist file.
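The clauses described in this section can all live in one synclist. A hypothetical file combining them, built from the examples above (the ``*.post`` script name and the merge source path are illustrative):

```shell
# Hypothetical synclist combining an ordinary entry with the
# EXECUTEALWAYS, APPEND, and MERGE clauses described above.
cat > /tmp/my.synclist <<'EOF'
/tmp/share/file2 -> /tmp/file2

EXECUTEALWAYS:
/tmp/share/file2.post

APPEND:
/etc/myappenddir/appendfile -> /etc/mysetup/setup

MERGE:
/etc/mydir/mergegroup -> /etc/group
EOF
```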
.. _the_localtion_of_synclist_file_for_updatenode_label:


@@ -21,4 +21,4 @@ The RPM names below are only examples, substitute your specific level and archit
genitrd rhels7.3-ppc64le-install-compute --ignorekernelchk
nodeset <CN> osimage=rhels7.3-ppc64le-install-compute --noupdateinitrd
#. Boot CN from net normallly.
#. Boot CN from net normally.


@@ -1,4 +1,3 @@
.. include:: ../../../common/deployment/enable_localdisk.rst
``Note``:
* To keep the contents on local disk after you using ``enablepart=yes`` to do the disk partitioning, make sure to set ``enablepart=no`` in partition configuration file after the node is booted.
.. note:: ``enablepart=yes`` in partition file will partition the local disk at every boot. If you want to preserve the contents on local disk at next boot, change to ``enablepart=no`` after the initial provision. A log file ``/.sllocal/log/localdisk.log`` on the target node can be used for debugging.


@@ -30,8 +30,8 @@ chzone.1
The \ **chzone**\ command is designed to change the definition of a zone previously defined in the cluster.
The chzone command is only supported on Linux ( No AIX support).
The nodes are not updated with the new root ssh keys by chzone. You must run updatenode -k or xdsh -K to the nodes to update the root ssh keys to the new generated zone keys. This will also sync any service nodes with the zone keys, if you have a hierarchical cluster.
The \ **chzone**\ command is only supported on Linux (no AIX support).
The nodes are not updated with the new root ssh keys by \ **chzone**\ . You must run \ **updatenode -k**\ or \ **xdsh -K**\ to the nodes to update the root ssh keys to the new generated zone keys. This will also sync any service nodes with the zone keys, if you have a hierarchical cluster.
Note: if any zones in the zone table, there must be one and only one defaultzone. Otherwise, errors will occur.
@@ -55,14 +55,14 @@ Note: if any zones in the zone table, there must be one and only one defaultzone
\ **-k | -**\ **-sshkeypath**\ \ *full path to the ssh RSA private key*\
This is the path to the id_rsa key that will be used to build new root's ssh keys for the zone. If -k is used, it will generate the ssh public key from the input ssh RSA private key, and store both in /etc/xcat/sshkeys/<zonename>/.ssh directory.
This is the path to the id_rsa key that will be used to build new root's ssh keys for the zone. If \ **-k**\ is used, it will generate the ssh public key from the input ssh RSA private key, and store both in /etc/xcat/sshkeys/<zonename>/.ssh directory.
\ **-K | -**\ **-genkeys**\
Using this flag, will generate new ssh RSA private and public keys for the zone into the /etc/xcat/sshkeys/<zonename>/.ssh directory.
The nodes are not automatically updated with the new root ssh keys by chzone. You must run updatenode -k or xdsh -K to the nodes to update the root ssh keys to the new generated zone keys. This will also sync any service nodes with the zone keys, if you have a hierarchical cluster.
The nodes are not automatically updated with the new root ssh keys by chzone. You must run \ **updatenode -k**\ or \ **xdsh -K**\ to the nodes to update the root ssh keys to the new generated zone keys. This will also sync any service nodes with the zone keys, if you have a hierarchical cluster.
@@ -79,7 +79,7 @@ Note: if any zones in the zone table, there must be one and only one defaultzone
\ **-a | -**\ **-addnoderange**\ \ *noderange*\
For each node in the noderange, it will set the zonename attribute for that node to the input zonename.
If the -g flag is also on the command, then
If the \ **-g**\ flag is also on the command, then
it will add the group name "zonename" to each node in the noderange.
@@ -88,14 +88,14 @@ Note: if any zones in the zone table, there must be one and only one defaultzone
For each node in the noderange, if the node is a member of the input zone, it will remove the zonename attribute for that node.
If any of the nodes in the noderange is not a member of the zone, you will get an error and nothing will be changed.
If the -g flag is also on the command, then
If the \ **-g**\ flag is also on the command, then
it will remove the group name "zonename" from each node in the noderange.
\ **-s| -**\ **-sshbetweennodes**\ \ **yes|no**\
If -s entered, the zone sshbetweennodes attribute will be set to yes or no based on the input. When this is set to yes, then ssh will be setup to allow passwordless root access between nodes. If no, then root will be prompted for a password when running ssh between the nodes in the zone.
If \ **-s**\ is entered, the zone sshbetweennodes attribute will be set to yes or no based on the input. When this is set to yes, ssh will be set up to allow passwordless root access between nodes. If no, root will be prompted for a password when running ssh between the nodes in the zone.
@@ -168,9 +168,7 @@ Note: if any zones in the zone table, there must be one and only one defaultzone
5.
To remove a group of nodes (compute4) from zone4 and remove zone4 group from the nodes, enter:
5. To remove a group of nodes (compute4) from zone4 and remove zone4 group from the nodes, enter:
.. code-block:: perl
@@ -208,5 +206,5 @@ Location of the chzone command.
****************
L <mkzone(1)|mkzone.1>,L <rmzone(1)|rmzone.1>,L <xdsh(1)|xdsh.1>, updatenode(1)|updatenode.1
mkzone(1)|mkzone.1, rmzone(1)|rmzone.1, xdsh(1)|xdsh.1, updatenode(1)|updatenode.1


@@ -175,9 +175,7 @@ EXAMPLES
5.
To setup the DB2 database but not start xcat running with it:
5. To setup the DB2 database but not start xcat running with it:
.. code-block:: perl


@@ -112,9 +112,7 @@ EXAMPLES
2.
To display the tree of service node hierarchy for service node "mysn01".
2. To display the tree of service node hierarchy for service node "mysn01".
.. code-block:: perl
@@ -135,9 +133,7 @@ EXAMPLES
3.
To display the tree of hardware hierarchy for all the nodes.
3. To display the tree of hardware hierarchy for all the nodes.
.. code-block:: perl


@@ -247,9 +247,7 @@ EXAMPLES
4.
Initialize an xCAT node called "node02" as an AIX diskless node. Create a new NIM machine definition name with the osimage as an extension to the xCAT node name.
4. Initialize an xCAT node called "node02" as an AIX diskless node. Create a new NIM machine definition name with the osimage as an extension to the xCAT node name.
.. code-block:: perl


@@ -132,9 +132,7 @@ Note: if any zones in the zone table, there must be one and only one defaultzone
3.
To make a new zone2A using the ssh id_rsa private key in /root/.ssh:
3. To make a new zone2A using the ssh id_rsa private key in /root/.ssh:
.. code-block:: perl
@@ -144,9 +142,7 @@ Note: if any zones in the zone table, there must be one and only one defaultzone
4.
To make a new zone3 and assign the noderange compute3 to the zone enter:
4. To make a new zone3 and assign the noderange compute3 to the zone enter:
.. code-block:: perl
@@ -166,9 +162,7 @@ Note: if any zones in the zone table, there must be one and only one defaultzone
6.
To make a new zone5 and assign the noderange compute5 to the zone and add zone5 as a group to each node but not allow passwordless ssh between the nodes enter:
6. To make a new zone5 and assign the noderange compute5 to the zone and add zone5 as a group to each node but not allow passwordless ssh between the nodes enter:
.. code-block:: perl


@@ -29,7 +29,7 @@ DESCRIPTION
***********
The \ **nodechmac**\ command changes the MAC address for provisioned nodes network interface.
The \ **nodechmac**\ command changes the MAC address for provisioned node's network interface.
You can use this command to keep an existing node configuration. For example, if an existing node has hardware problems, the replacement node can use the old configurations. By using the nodechmac command, the node name and network settings of the old node can be used by the new node.


@@ -11,7 +11,7 @@ SYNOPSIS
********
\ **piflash**\ <noderange> -**\ **-package <filename>
\ **piflash**\ \ *noderange*\ \ **-**\ **-package**\ \ *filename*\
***********


@@ -832,9 +832,7 @@ so no additional plugins are needed for BladeCenter.)
4
Query all the attirbutes for management module node MM1. (For chassis)
4. Query all the attributes for management module node MM1. (For chassis)
.. code-block:: perl


@@ -384,9 +384,7 @@ Calling \ **rinv**\ for VMware will display the UUID/GUID, number of CPUs, amou
3.
To retrieve 'config' information from the HMC-managed LPAR node3, enter:
3. To retrieve 'config' information from the HMC-managed LPAR node3, enter:
.. code-block:: perl
@@ -406,9 +404,7 @@ Calling \ **rinv**\ for VMware will display the UUID/GUID, number of CPUs, amou
4.
To retrieve information about a VMware node vm1, enter:
4. To retrieve information about a VMware node vm1, enter:
.. code-block:: perl
@@ -433,9 +429,7 @@ Calling \ **rinv**\ for VMware will display the UUID/GUID, number of CPUs, amou
5.
To list the defined network names available for a given node:
5. To list the defined network names available for a given node:
.. code-block:: perl
@@ -460,9 +454,7 @@ Calling \ **rinv**\ for VMware will display the UUID/GUID, number of CPUs, amou
6.
To list the configuration for a given network:
6. To list the configuration for a given network:
.. code-block:: perl
@@ -483,9 +475,7 @@ Calling \ **rinv**\ for VMware will display the UUID/GUID, number of CPUs, amou
7.
To list the disk pool names available:
7. To list the disk pool names available:
.. code-block:: perl
@@ -505,9 +495,7 @@ Calling \ **rinv**\ for VMware will display the UUID/GUID, number of CPUs, amou
8.
List the configuration for a given disk pool:
8. List the configuration for a given disk pool:
.. code-block:: perl
@@ -527,9 +515,7 @@ Calling \ **rinv**\ for VMware will display the UUID/GUID, number of CPUs, amou
9.
List the known zFCP pool names.
9. List the known zFCP pool names.
.. code-block:: perl
@@ -549,9 +535,7 @@ Calling \ **rinv**\ for VMware will display the UUID/GUID, number of CPUs, amou
10.
List the SCSI/FCP devices contained in a given zFCP pool:
10. List the SCSI/FCP devices contained in a given zFCP pool:
.. code-block:: perl


@@ -91,9 +91,7 @@ Note: if any zones in the zone table, there must be one and only one defaultzone
2.
To remove zone2 from the zone table, the zone2 zonename attribute, and the zone2 group assigned to all nodes that were in zone2, enter:
2. To remove zone2 from the zone table, the zone2 zonename attribute, and the zone2 group assigned to all nodes that were in zone2, enter:
.. code-block:: perl
@@ -103,9 +101,7 @@ Note: if any zones in the zone table, there must be one and only one defaultzone
3.
To remove zone3 from the zone table, all the node zone attributes and override the fact it is the defaultzone, enter:
3. To remove zone3 from the zone table, all the node zone attributes and override the fact it is the defaultzone, enter:
.. code-block:: perl
@@ -127,5 +123,5 @@ Location of the rmzone command.
****************
L <mkzone(1)|mkzone.1>,L <chzone(1)|chzone.1>,L <xdsh(1)|xdsh.1>, updatenode(1)|updatenode.1
mkzone(1)|mkzone.1, chzone(1)|chzone.1, xdsh(1)|xdsh.1, updatenode(1)|updatenode.1


@@ -320,7 +320,7 @@ OPTIONS
\ **community**\ ={\ **public**\ | \ *string*\ }
Get or set the SNMP commmunity value. The default is \ **public**\ .
Get or set the SNMP community value. The default is \ **public**\ .
@@ -446,7 +446,7 @@ OPTIONS
\ **vlan**\
Get or set vlan ID. For get vlan ID, if vlan is not enabled, 'BMC VLAN disabled' will be outputed. For set vlan ID, the valid value are [1-4096].
Get or set the vlan ID. When getting the vlan ID, if vlan is not enabled, 'BMC VLAN disabled' will be displayed. When setting the vlan ID, the valid values are [1-4096].
@@ -1250,9 +1250,7 @@ EXAMPLES
27.
To deconfigure memory bank 9 and 10 of Processing Unit 0 on mm01:
27. To deconfigure memory bank 9 and 10 of Processing Unit 0 on mm01:
.. code-block:: perl
@@ -1270,9 +1268,7 @@ EXAMPLES
28.
To reset the network interface of the specified nodes:
28. To reset the network interface of the specified nodes:
.. code-block:: perl


@@ -163,7 +163,7 @@ Command Protocol can be used. See man \ **xdsh**\ for more details.
for Ethernet switches and IB switches under
\ */opt/xcat/share/xcat/devicetype*\ directory. If you want to overwrite
any of the configuration files, copy them to \ */var/opt/xcat/*\
directory and cutomize.
directory and customize.
For example, \ *base/IBSwitch/Qlogic/config*\ is the configuration
file location if devicetype is specified as IBSwitch::Qlogic.
xCAT will first search config file using \ */var/opt/xcat/*\ as the base.
@@ -329,9 +329,7 @@ using the exact match option, generating no additional templates, enter:
8.
To execute \ **sinv**\ on the AIX NIM 611dskls spot and compare /etc/hosts to compute1 node, run the following:
8. To execute \ **sinv**\ on the AIX NIM 611dskls spot and compare /etc/hosts to compute1 node, run the following:
.. code-block:: perl
@@ -343,9 +341,7 @@ using the exact match option, generating no additional templates, enter:
9.
To execute \ **sinv**\ on the device mswitch2 and compare to mswitch1
9. To execute \ **sinv**\ on the device mswitch2 and compare to mswitch1
.. code-block:: perl
@@ -367,5 +363,5 @@ Location of the sinv command.
****************
L <xdsh(1)|xdsh.1>, noderange(3)|noderange.3
xdsh(1)|xdsh.1, noderange(3)|noderange.3


@@ -47,7 +47,7 @@ If the Management Node is target node, it must be defined in the xCAT database w
\ **REMOTE**\ \ **USER**\ :
A user_ID can be specified for the remote copy command. Remote user
specification is identical for the xdcp and xdsh commands. See the xdsh
specification is identical for the \ **xdcp**\ and \ **xdsh**\ commands. See the \ **xdsh**\
command for more information.
\ **REMOTE**\ \ **COMMAND**\ \ **COPY**\ :
@@ -234,7 +234,7 @@ standard output or standard error is displayed.
When you use the append script, the file (left) of the arrow is appended to the file right of the arrow. In this example, /etc/myappenddir/appendfile is appended to /etc/mysetup/setup file, which must already exist on the node. The /opt/xcat/share/xcat/scripts/xdcpappend.sh is used to accomplish this.
Another option is the \ **MERGE:**\ clause in the synclist file. The \ **MERGE:**\ clause is used to append the contents of the input file to /etc/passwd, /etc/group, or /etc/shadow on a Linux node. It is only supported for those files and only on Linux. You must not use both the APPEND and MERGE funcion for these three files. The processing could end up not creating the file you desire. The MERGE function is the preferred method, becuase APPEND only adds to the file. MERGE will add to the file but also insure there are no duplicate entries.
Another option is the \ **MERGE:**\ clause in the synclist file. The \ **MERGE:**\ clause is used to append the contents of the input file to /etc/passwd, /etc/group, or /etc/shadow on a Linux node. It is only supported for those files and only on Linux. You must not use both the APPEND and MERGE function for these three files; the processing could end up not creating the file you desire. The MERGE function is the preferred method, because APPEND only adds to the file, while MERGE will add to the file but also ensure there are no duplicate entries.
For example, your rsynclist file may look like this:
/tmp/share/file2 -> /tmp/file2
@@ -605,9 +605,7 @@ from the local host to node1 in the cluster, enter:
9.
To rsync all the files in /home/mikev to the compute nodes:
9. To rsync all the files in /home/mikev to the compute nodes:
Create a rsync file /tmp/myrsync, with this line:


@@ -87,8 +87,7 @@ The remote shell is determined as follows, in order of precedence:
4. The \ **/usr/bin/ssh**\ command.
The remote shell options are determined as follows, in order of prece-
dence:
The remote shell options are determined as follows, in order of precedence:
1. The \ **-o**\ flag.
@@ -664,11 +663,9 @@ on the service node fedora9 diskless image, enter:
9.
To define the QLogic IB switch as a node and to set up the SSH keys for IB switch
\ **qswitch**\ with device configuration file
\ **/var/opt/xcat/IBSwitch/Qlogic/config**\ and user name \ **username**\ , enter
9. To define the QLogic IB switch as a node and to set up the SSH keys for IB switch
\ **qswitch**\ with device configuration file
\ **/var/opt/xcat/IBSwitch/Qlogic/config**\ and user name \ **username**\ , enter
.. code-block:: perl
@@ -703,9 +700,7 @@ on the service node fedora9 diskless image, enter:
12.
To define a BNT Ethernet switch as a node and run a command to create a new vlan with vlan id 3 on the switch.
12. To define a BNT Ethernet switch as a node and run a command to create a new vlan with vlan id 3 on the switch.
.. code-block:: perl
@@ -727,9 +722,7 @@ on the service node fedora9 diskless image, enter:
13.
To run \ **xdsh**\ with the non-root userid "user1" that has been setup as an xCAT userid and with sudo on node1 and node2 to run as root, do the following, see xCAT doc on Granting_Users_xCAT_privileges:
13. To run \ **xdsh**\ with the non-root userid "user1" that has been setup as an xCAT userid and with sudo on node1 and node2 to run as root, do the following, see xCAT doc on Granting_Users_xCAT_privileges:
.. code-block:: perl


@@ -166,10 +166,8 @@ the format used in the Description, enter:
2.
To display the results of a command issued on several nodes with
identical output displayed only once, enter:
2. To display the results of a command issued on several nodes with
identical output displayed only once, enter:
.. code-block:: perl


@@ -44,7 +44,7 @@ nodegroup Attributes:
\ **grouptype**\
The only current valid value is dynamic. We will be looking at having the object def commands working with static group definitions in the nodelist table.
Static or Dynamic. A static group is defined to contain a specific set of cluster nodes. A dynamic node group is one that has its members determined by specifying selection criteria for node attributes.


@@ -457,7 +457,7 @@ site Attributes:
xcatlport: The port used by xcatd command log writer process to collect command output.
xcatsslversion: The ssl version by xcatd. Default is SSLv3.
xcatsslversion: The ssl version by xcatd. Default is TLSv1.
xcatsslciphers: The ssl cipher by xcatd. Default is 3DES.


@@ -331,7 +331,7 @@ group Attributes:
\ **grouptype**\ (nodegroup.grouptype)
The only current valid value is dynamic. We will be looking at having the object def commands working with static group definitions in the nodelist table.
Static or Dynamic. A static group is defined to contain a specific set of cluster nodes. A dynamic node group is one that has its members determined by specifying selection criteria for node attributes.


@@ -1,7 +1,7 @@
Extreme Cloud/Cluster Administration Toolkit
============================================
Extreme Cloud Administration Toolkit
====================================
xCAT stands for E\ **x**\ treme **C**\ loud/\ **C**\ luster **A**\ dministration **T**\ oolkit.
xCAT stands for E\ **x**\ treme **C**\ loud **A**\ dministration **T**\ oolkit.
xCAT offers complete management of clouds, clusters, HPC, grids, datacenters,
renderfarms, online gaming infrastructure, and whatever tomorrow's next buzzword may be.


@@ -14,13 +14,19 @@ xCAT 2.14.x
|xCAT |New OS |New |New Feature |
|Version | |Hardware | |
+=================================+===============+=============+==================================+
|| xCAT 2.14.4 2018/10/19 |- UBT18.04.1 | | |
|| | | | |
| `2.14.4 Release Notes <https:// | | | |
| github.com/xcat2/xcat-core/wiki | | | |
| /XCAT_2.14.4_Release_Notes>`_ | | | |
+---------------------------------+---------------+-------------+----------------------------------+
|| xCAT 2.14.3 2018/8/24 |- SLES12.3 | |- OpenBMC rflash enhancement |
|| | | | |
| `2.14.3 Release Notes <https:// | | |- xcat-inventory 0.1.5 |
| github.com/xcat2/xcat-core/wiki | | | |
| /XCAT_2.14.3_Release_Notes>`_ | | |- goconserver v0.3.1 |
+---------------------------------+---------------+-------------+----------------------------------+
|| xCAT 2.14.2 2018/7/13 |- Ubuntu 18.04 | | |
|| xCAT 2.14.2 2018/7/13 |- UBT 18.04 | | |
|| |- RHEL6.10 | | |
| `2.14.2 Release Notes <https:// | | | |
| github.com/xcat2/xcat-core/wiki | | | |


@@ -1,117 +1,132 @@
Using RPM (recommended)
=======================
**Support is only for RHEL 7.5 for Power LE (Power 9)**
.. note:: Supported only on RHEL 7.5 for POWER9
If you want to install ``xCAT-openbmc-py`` on SN manually, please accoring **install xCAT-openbmc-py on MN** part. But if you hope xCAT could install it automatically, please config as **Install xCAT-openbmc-py on SN** part.
.. note:: In a hierarchical environment ``xCAT-openbmc-py`` must be installed on both Management and Service Nodes. On the Service Node, ``xCAT-openbmc-py`` can be installed directly by following the instructions in **Install xCAT-openbmc-py on MN**, or it can be installed from the Management Node by following the instructions in **Install xCAT-openbmc-py on SN**.
Install xCAT-openbmc-py on MN
-----------------------------
The following repositories should be configured on your Management Node (and Service Nodes).
The following repositories should be configured on your Management Node.
* RHEL 7.5 OS Repository
* RHEL 7.5 Extras Repository
* RHEL 7 EPEL Repo (https://fedoraproject.org/wiki/EPEL)
* Fedora28 Repo (for ``gevent``, ``greenlet``)
* RHEL 7.5 OS repository
* RHEL 7.5 Extras repository
* RHEL 7 EPEL repository (https://fedoraproject.org/wiki/EPEL)
* Fedora28 repository (for ``gevent`` and ``greenlet``)
#. Configure the MN/SN to the RHEL 7.5 OS Repo
#. Configure RHEL 7.5 OS repository
#. Configure the MN/SN to the RHEL 7.5 Extras Repo
#. Configure RHEL 7.5 Extras repository
#. Configure the MN/SN to the EPEL Repo (https://fedoraproject.org/wiki/EPEL) ::
#. Configure EPEL repository ::
yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
#. Create a local Fedora28 Repo and Configure the MN/SN to the FC28 Repo
#. Create a local Fedora28 repository and configure the MN to the FC28 Repo
Here's an example to configure the Fedora 28 repo at ``/install/repos/fc28``
Here's an example to configure the Fedora 28 repository at ``/install/repos/fc28``
#. Make the target repo directory on the MN: ::
#. Make the target repository directory on the MN: ::
mkdir -p /install/repos/fc28/ppc64le/Packages
#. Download the rpms from the Internet: ::
#. Download the rpms: ::
cd /install/repos/fc28/ppc64le/Packages
wget https://www.rpmfind.net/linux/fedora-secondary/releases/28/Everything/ppc64le/os/Packages/p/python2-gevent-1.2.2-2.fc28.ppc64le.rpm
wget https://www.rpmfind.net/linux/fedora-secondary/releases/28/Everything/ppc64le/os/Packages/p/python2-greenlet-0.4.13-2.fc28.ppc64le.rpm
#. Create a yum repo in that directory: ::
#. Create a repository in that directory: ::
cd /install/repos/fc28/ppc64le/
createrepo .
#. Install ``xCAT-openbmc-py`` using ``yum``: ::
#. Create a repo file ``/etc/yum.repos.d/fc28.repo`` and set its contents: ::
[fc28]
name=Fedora28 yum repository for gevent and greenlet
baseurl=file:///install/repos/fc28/ppc64le/
enabled=1
gpgcheck=0
#. Install ``xCAT-openbmc-py`` : ::
yum install xCAT-openbmc-py
**Note**: The install will fail if the dependencies cannot be met.
Install xCAT-openbmc-py on SN from MN
-------------------------------------
Install xCAT-openbmc-py on SN
-----------------------------
.. attention:: Instructions below assume the Service Node has access to the Internet. If not, a local EPEL repository needs to be configured on the Management Node, similar to the RHEL Extras repository.
For all types of SN installation, need to create repo for ``gevent`` and ``greenlet`` and config ``otherpkglist`` of osimage on MN
#. Copy ``Packages`` directory containing ``gevent`` and ``greenlet`` rpms from ``/install/repos/fc28/ppc64le`` to the directory pointed to by ``otherpkgdir`` attribute of the osimage. ::
#. Create the repo at ``otherpkgdir`` path as the example above, could run ``lsdef -t osimage <os>-<arch>-<install|netboot>-service`` to get the path ::
# Display the directory of otherpkgdir
lsdef -t osimage rhels7.5-ppc64le-install-service -i otherpkgdir -c
# lsdef -t osimage rhels7.5-ppc64le-install-service | grep otherpkgdir
otherpkgdir=/install/post/otherpkgs/rhels7.5/ppc64le
# Create Packages directory
mkdir /install/post/otherpkgs/rhels7.5-alternate/ppc64le/xcat/Packages
#. Configure ``otherpkglist`` of the current osimage ::
# Copy rpms
cp /install/repos/fc28/ppc64le/Packages/*.rpm /install/post/otherpkgs/rhels7.5-alternate/ppc64le/xcat/Packages
# lsdef -t osimage rhels7.5-ppc64le-install-service | grep otherpkglist
otherpkglist=/opt/xcat/share/xcat/install/rh/service.rhels7.ppc64le.otherpkgs.pkglist
#. Configure ``otherpkglist`` attribute of the osimage ::
chdef -t osimage rhels7.5-ppc64le-install-service otherpkglist=/opt/xcat/share/xcat/install/rh/service.rhels7.ppc64le.otherpkgs.pkglist
#. Add the following entries to the contents of ``/opt/xcat/share/xcat/install/rh/service.rhels7.ppc64le.otherpkgs.pkglist`` ::
# cat /opt/xcat/share/xcat/install/rh/service.rhels7.ppc64le.otherpkgs.pkglist
...
xcat/Packages/python2-gevent
xcat/Packages/python2-greenlet
xcat/xcat-core/xCAT-openbmc-py
Install on diskful SN by updatenode
```````````````````````````````````
#. Choose one of the 3 methods below to complete the installation
If you have installed SN without ``xCAT-openbmc-py package``, could run updatenode to install it.
Install on diskful SN using updatenode
``````````````````````````````````````
#. Sync epel repo and key file ::
If SN was installed without ``xCAT-openbmc-py`` package, ``updatenode`` can be used to install that package.
# rsync -v /etc/yum.repos.d/epel.repo root@10.3.17.17:/etc/yum.repos.d/
# rsync -v /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 root@10.3.17.17:/etc/pki/rpm-gpg/
#. Sync EPEL repository and key file ::
rsync -v /etc/yum.repos.d/epel.repo root@<SN>:/etc/yum.repos.d/
rsync -v /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 root@<SN>:/etc/pki/rpm-gpg/
#. Update packages on SN ::
# updatenode service -S
updatenode <SN> -S
Install on diskful SN
`````````````````````
Install on diskful SN using rinstall
````````````````````````````````````
#. Configure ``synclists`` of osimage ::
#. Configure ``synclists`` attribute of osimage ::
# lsdef -t osimage rhels7.5-ppc64le-install-service | grep synclists
synclists=/install/custom/netboot/compute.synclist
chdef -t osimage rhels7.5-ppc64le-install-service synclists=/install/custom/netboot/compute.synclist
#. Add the following to the contents of ``/install/custom/netboot/compute.synclist`` ::
# cat /install/custom/netboot/compute.synclist
...
/etc/yum.repos.d/epel.repo -> /etc/yum.repos.d/epel.repo
/etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 -> /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
#. Install SN ::
# rinstall service osimage=rhels7.5-ppc64le-install-service
rinstall <SN> osimage=rhels7.5-ppc64le-install-service
Install on diskless SN
``````````````````````
Install on diskless SN using rinstall
`````````````````````````````````````
#. Add epel online repo https://dl.fedoraproject.org/pub/epel/7/ppc64le to ``pkgdir`` ::
#. Add EPEL online repository https://dl.fedoraproject.org/pub/epel/7/ppc64le to ``pkgdir`` attribute of osimage::
# lsdef -t osimage -o rhels7.5-ppc64le-netboot-service | grep pkgdir
pkgdir=/install/rhels7.5/ppc64le,https://dl.fedoraproject.org/pub/epel/7/ppc64le
chdef -t osimage -o rhels7.5-ppc64le-netboot-service -p pkgdir=https://dl.fedoraproject.org/pub/epel/7/ppc64le
#. Install SN ::
#. Install diskless SN ::
# genimage rhels7.5-ppc64le-netboot-service
# packimage rhels7.5-ppc64le-netboot-service
# rinstall service osimage=rhels7.5-ppc64le-netboot-service
genimage rhels7.5-ppc64le-netboot-service
packimage rhels7.5-ppc64le-netboot-service
rinstall <SN> osimage=rhels7.5-ppc64le-netboot-service


@@ -1,5 +1,3 @@
xCAT - eXtreme Cloud Administration Toolkit
xCAT is a toolkit for the deployment and administration of clusters.
xCAT documentation is available at: http://xcat-docs.readthedocs.org/


@@ -3441,7 +3441,7 @@ sub verify_targets
{
my $rsp = {};
$rsp->{error}->[0] =
"$user_target is not responding. No command will be issued to this host.";
"$user_target is not responding. Make sure it is a node object defined in the xCAT database. No command will be issued to this host.";
xCAT::MsgUtils->message("E", $rsp, $::CALLBACK);
# report error status --nodestatus


@@ -151,7 +151,7 @@ sub remote_copy_command
or die "Can not open file $rsyncfile";
my $dest_dir_list = join ' ', keys %{ $$config{'destDir_srcFile'} };
my $dest_user_host = $$config{'dest-host'};
if ($$config{'dest-user'})
if (getpwnam($$config{'dest-user'}))
{
$dest_user_host =
"$$config{'dest-user'}@" . "$$config{'dest-host'}";
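The guard above only prefixes ``user@`` when the destination user is a real local account. A rough Python sketch of the same idea (function and variable names are illustrative, not from the source):

```python
import pwd

def dest_user_host(dest_host, dest_user=None):
    """Return 'user@host' only when the user exists locally,
    mirroring the getpwnam() check; otherwise fall back to host."""
    if dest_user:
        try:
            pwd.getpwnam(dest_user)  # raises KeyError for unknown users
            return f"{dest_user}@{dest_host}"
        except KeyError:
            pass
    return dest_host

print(dest_user_host("node1", "root"))  # → root@node1
```

With an unknown or empty user, the bare hostname is returned and the copy runs as the invoking user.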


@@ -150,7 +150,7 @@ sub remote_copy_command {
open SCPCMDFILE, "> $scpfile"
or die "Can not open file $scpfile";
if ($$config{'dest-user'})
if (getpwnam($$config{'dest-user'}))
{
$dest_user_host =
"$$config{'dest-user'}@" . "$$config{'dest-host'}";


@@ -585,7 +585,7 @@ passed as argument rather than by table value',
table_desc => 'Contains group definitions, whose membership is dynamic depending on characteristics of the node.',
descriptions => {
groupname => 'Name of the group.',
grouptype => 'The only current valid value is dynamic. We will be looking at having the object def commands working with static group definitions in the nodelist table.',
grouptype => 'Static or Dynamic. A static group is defined to contain a specific set of cluster nodes. A dynamic node group is one that has its members determined by specifying selection criteria for node attributes.',
members => 'The value of the attribute is not used, but the attribute is necessary as a place holder for the object def commands. (The membership for static groups is stored in the nodelist table.)',
membergroups => 'This attribute stores a comma-separated list of nodegroups that this nodegroup refers to. This attribute is only used by PCM.',
wherevals => 'A list of "attr*val" pairs that can be used to determine the members of a dynamic group, the delimiter is "::" and the operator * can be ==, =~, != or !~.',
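The ``wherevals`` rule above can be sketched in Python — a ``::``-delimited list of ``attr*val`` pairs with the four operators from the description (the node data below is hypothetical):

```python
import re

def match_wherevals(node_attrs, wherevals):
    """Evaluate '::'-delimited 'attr*val' pairs where * is ==, =~, != or !~."""
    for pair in wherevals.split('::'):
        m = re.match(r'(.+?)(==|=~|!=|!~)(.*)', pair)
        if not m:
            return False
        attr, op, val = m.groups()
        actual = node_attrs.get(attr, '')
        if op == '==' and actual != val: return False
        if op == '!=' and actual == val: return False
        if op == '=~' and not re.search(val, actual): return False
        if op == '!~' and re.search(val, actual): return False
    return True

print(match_wherevals({'arch': 'ppc64le', 'mgt': 'openbmc'},
                      'arch==ppc64le::mgt=~bmc'))
# → True
```

A node joins the dynamic group only when every pair in the criteria matches its attributes.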
@@ -1255,9 +1255,11 @@ passed as argument rather than by table value',
" FQDNfirst: Fully Qualified Domain Name first. If set to 1/yes/enable, the /etc/hosts \n" .
" entries generated by 'makehosts' will put the FQDN before the PQDN(Partially \n" .
" Qualified Domain Name). Otherwise, the original behavior will be performed.\n\n" .
" hierarchicalattrs: Table attributes(e.g. postscripts, postbootscripts) that will be\n" .
" included hierarchically. Attribute values for all the node's groups\n" .
" hierarchicalattrs: A comma-delimited list of table attributes (e.g. postscripts, postbootscripts)\n" .
" that will be included hierarchically. Attribute values for all the node's groups\n" .
" will be applied to the node in the groups' order except the repeat one.\n" .
" By default, comma is used to combine the values. But some columns use different \n" .
" delimiter, to specify delimiter for those columns as format of 'column:delimiter'.\n\n" .
" dbtracelevel: The trace level for the database access log. To activate this setting, please. \n".
" restart xcatd or send HUP signal to the 'xcatd: DB Access' process, Like: .\n".
" ps -ef | grep 'xcatd: DB Access' | grep -v grep | awk '{print \$2}' | xargs kill -HUP \n".


@@ -2324,12 +2324,8 @@ sub getNodesAttribs {
}
if (!exists($options{hierarchy_attrs})) {
my @hierarchy_attrs = ();
my $hierarchy_field = xCAT::TableUtils->get_site_attribute("hierarchicalattrs");
if ($hierarchy_field) {
@hierarchy_attrs = split(/,/, $hierarchy_field);
}
$options{hierarchy_attrs} = \@hierarchy_attrs;
my $hierarchy_field = xCAT::TableUtils->getHierarchyAttrs();
$options{hierarchy_attrs} = $hierarchy_field;
}
if ($dbworkerpid > 0) {
@@ -2359,12 +2355,6 @@ sub getNodesAttribs {
$self->{nodelist}->{_use_cache} = 1;
}
my $rethash;
#my @hierarchy_attrs = ();
#my $hierarchy_field = xCAT::TableUtils->get_site_attribute("hierarchicalattrs");
#if ($hierarchy_field) {
# @hierarchy_attrs = split(/,/, $hierarchy_field);
#}
#$options{hierarchy_attrs} = \@hierarchy_attrs;
foreach (@$nodelist) {
my @nodeentries = $self->getNodeAttribs($_, \@realattribs, %options);
$rethash->{$_} = \@nodeentries; #$self->getNodeAttribs($_,\@attribs);
@@ -2755,12 +2745,8 @@ sub getNodeAttribs
}
if (!exists($options{hierarchy_attrs})) {
my @hierarchy_attrs = ();
my $hierarchy_field = xCAT::TableUtils->get_site_attribute("hierarchicalattrs");
if ($hierarchy_field) {
@hierarchy_attrs = split(/,/, $hierarchy_field);
}
$options{hierarchy_attrs} = \@hierarchy_attrs;
my $hierarchy_field = xCAT::TableUtils->getHierarchyAttrs();
$options{hierarchy_attrs} = $hierarchy_field;
}
if ($dbworkerpid > 0) { #TODO: should this be moved outside of the DB worker entirely? I'm thinking so, but I don't dare do so right now...
#the benefit would be the potentially computationally intensive substitution logic would be moved out and less time inside limited
@@ -3005,14 +2991,15 @@ sub getNodeAttribs_nosub_returnany
my $attrib;
my $result;
my @hierarchy_attrs = @{ $options{hierarchy_attrs} };
my $hierarchy_attrs = $options{hierarchy_attrs};
my $data = $results[0];
if (defined {$data}) { #if there was some data for the node, loop through and check it
foreach $result (@results) {
foreach $attrib (keys %attribsToDo) {
if (defined($result) && defined($result->{$attrib}) && $self->{tabname} ne 'nodelist'
&& @hierarchy_attrs && grep (/^$attrib$/, @hierarchy_attrs)) {
$result->{$attrib} .= ',+=NEXTRECORD';
&& ref($hierarchy_attrs) eq "HASH" && exists $hierarchy_attrs->{$attrib}) {
$result->{$attrib} .= $hierarchy_attrs->{$attrib};
$result->{$attrib} .= '+=NEXTRECORD';
}
#check each item in the results to see which attributes were satisfied
@@ -3072,8 +3059,9 @@ sub getNodeAttribs_nosub_returnany
$attribsDone{$attrib} = 0;
# for hierarchy attribute, append attributes from all the node's group
if (@hierarchy_attrs && grep (/^$attrib$/, @hierarchy_attrs)) {
$groupResult->{$attrib} .= ',+=NEXTRECORD';
if (ref($hierarchy_attrs) eq "HASH" && exists $hierarchy_attrs->{$attrib}) {
$groupResult->{$attrib} .= $hierarchy_attrs->{$attrib};
$groupResult->{$attrib} .= '+=NEXTRECORD';
}
foreach $result (@results) { #loop through our existing results to add or modify the value for this attribute
if (defined($result)) {
@@ -3160,13 +3148,13 @@ sub getNodeAttribs_nosub_returnany
for $result (@results) {
for my $key (keys %$result) {
$result->{$key} =~ s/\+=NEXTRECORD//g;
if (@hierarchy_attrs && grep (/^$key$/, @hierarchy_attrs)) {
my @attribs = split(/,/, $result->{$key});
if (ref($hierarchy_attrs) eq "HASH" && exists $hierarchy_attrs->{$key}) {
my @attribs = split(/\Q$hierarchy_attrs->{$key}/, $result->{$key});
my %count;
# remove the repeat value
@attribs = grep { ++$count{$_} < 2; } @attribs;
$result->{$key} = join(',', @attribs);
$result->{$key} = join($hierarchy_attrs->{$key}, @attribs);
}
}
}
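The append-and-deduplicate step above (split on the attribute's delimiter, drop repeats, re-join) can be sketched in Python — attribute values are hypothetical, and the real code first joins records with a ``+=NEXTRECORD`` marker:

```python
def merge_hierarchical(values, delimiter=','):
    """Join attribute values from a node and its groups, dropping
    repeats while keeping first-seen order (like the %count grep)."""
    seen = set()
    merged = []
    for value in values:
        for part in value.split(delimiter):
            if part not in seen:
                seen.add(part)
                merged.append(part)
    return delimiter.join(merged)

print(merge_hierarchical(['syslog,remoteshell', 'remoteshell,otherpkgs']))
# → syslog,remoteshell,otherpkgs
```

Note the merge honors whatever delimiter is configured for that column, not just a comma.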
@@ -3430,11 +3418,7 @@ sub getAllNodeAttribs
$self->{_use_cache} = 1;
$self->{nodelist}->{_use_cache} = 1;
my @hierarchy_attrs = ();
my $hierarchy_field = xCAT::TableUtils->get_site_attribute("hierarchicalattrs");
if ($hierarchy_field) {
@hierarchy_attrs = split(/,/, $hierarchy_field);
}
my $hierarchy_field = xCAT::TableUtils->getHierarchyAttrs();
while (my $data = $query->fetchrow_hashref())
{
@@ -3462,7 +3446,7 @@ sub getAllNodeAttribs
# @hierarchy_attrs = split(/,/, $hierarchy_field);
#}
my %options = ();
$options{hierarchy_attrs} = \@hierarchy_attrs;
$options{hierarchy_attrs} = $hierarchy_field;
foreach (@nodes)
{
if ($donenodes{$_}) { next; }


@@ -1533,6 +1533,51 @@ sub get_site_Master
}
#--------------------------------------------------------------------------------
=head3 getHierarchyAttrs
Get the global hierarchy attributes from the site table, and parse the corresponding delimiters.
Arguments:
none
Returns:
Hash reference of hierarchy attributes defined at site.hierarchicalattrs.
{
'begin' => '|',
'postscripts' => ','
}
Globals:
none
Error:
none
Example:
$attrs = xCAT::TableUtils->getHierarchyAttrs();
Comments:
none
=cut
#--------------------------------------------------------------------------------
sub getHierarchyAttrs
{
my %hierarchy_attrs = ();
my $hierarchy_field = xCAT::TableUtils->get_site_attribute("hierarchicalattrs");
if ($hierarchy_field) {
# $hierarchy_field should be `attr1_name:delimiter1,attr2,attr3:delimiter2`
foreach my $item (split(/,/, $hierarchy_field)) {
$item = xCAT::Utils->strim($item);
next unless ($item);
my ($attr, $del) = split(/\s*:\s*/, $item, 2);
$del = ',' unless ($del);
$hierarchy_attrs{$attr} = $del if ($attr);
}
}
return \%hierarchy_attrs;
}
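A rough Python sketch of the same parsing rule — ``attr:delimiter`` items separated by commas, with the delimiter defaulting to a comma (attribute names below are illustrative):

```python
def parse_hierarchy_attrs(field):
    """Parse 'attr1:delim1,attr2,attr3:delim2' into {attr: delimiter}."""
    attrs = {}
    for item in field.split(','):
        item = item.strip()
        if not item:
            continue
        attr, _, delim = item.partition(':')
        if attr.strip():
            attrs[attr.strip()] = delim.strip() or ','
    return attrs

print(parse_hierarchy_attrs('begin:|,postscripts'))
# → {'begin': '|', 'postscripts': ','}
```

This matches the documented return shape above: a hash mapping each hierarchical attribute to its delimiter.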
#-------------------------------------------------------------------------------
=head3 checkCredFiles


@@ -96,6 +96,8 @@ our %distnames = (
"1446216863.790260" => "rhels7.2", #x86_64
"1446216863.764721" => "rhels7.2", #ppc64
"1446216863.771788" => "rhels7.2", #ppc64le
"1527763142.409770" => "rhels8.0", #ppc64le
"1537463022.540933" => "rhels8.0", #ppc64le
"1285193176.593806" => "rhelhpc6", #x86_64
"1305067719.718814" => "rhelhpc6.1", #x86_64


@@ -9,7 +9,6 @@
use strict;
#use lib '.';
use Pod::Man;
use Pod::Html;
@@ -20,13 +19,6 @@ my $cachedir = '/tmp';
my @pods = getPodList($poddir);
#foreach (@pods) { print "$_\n"; } exit;
# Build the cmd overview page.
#writesummarypage("$poddir/man1/xcat.1.pod", @pods);
# Build the man page for each pod.
#mkdir($mandir) or die "Error: could not create $mandir.\n";
print "Converting PODs to man pages...\n";
foreach my $podfile (@pods) {
my $manfile = $podfile;
@@ -41,8 +33,6 @@ foreach my $podfile (@pods) {
my @dummyPods = createDummyPods($poddir, \@pods);
# Build the html page for each pod.
#mkdir($htmldir) or die "Error: could not create $htmldir.\n";
print "Converting PODs to HTML pages...\n";
# have to clear the cache, because old entries can cause a problem
@@ -55,7 +45,6 @@ foreach my $podfile (@pods) {
$hdir =~ s|/[^/]*$||; # get rid of the basename part
if (system("mkdir -p $hdir")) { die "Error: could not create $hdir.\n"; }
#print "$podfile, $htmlfile, $poddir, $htmldir\n";
convertpod2html($podfile, $htmlfile, $poddir, $htmldir);
}
@@ -74,16 +63,12 @@ sub createDummyPods {
my ($poddir, $pods) = @_;
my $cmd = "grep -r -E 'L<.+\\([57]\\)\\|.+\\.[57]>' " . $poddir;
#print "Running cmd: ", $cmd, "\n";
my @lines = `$cmd`;
if ($?) { print "Did not find any section 5 man page, creating dummy pods...\n"; print join('', @lines); }
#my @lines;
#system($cmd);
my @dummyPods;
foreach my $l (@lines) {
#print "$l\n";
my @matches = $l =~ /L<([^\(]+)\(([57])\)\|\1\.[57]>/g; # get all the matches in the line
# The above line should create the array with every other entry being the man page name
# and every other entry is the section # (5 or 7)
@@ -134,71 +119,6 @@ sub getPodList {
}
# Create the xcat man page that gives a summary description of each xcat cmd.
# Not used
sub writesummarypage {
my $file = shift; # relative path file name of the man page
# the rest of @_ contains the pod files that describe each cmd
open(FILE, ">$file") or die "Error: could not open $file for writing.\n";
print FILE <<'EOS1';
=head1 NAME
B<xcat> - extreme Cluster Administration Tool.
=head1 DESCRIPTION
Extreme Cluster Administration Toolkit (xCAT). xCAT is a scalable distributed computing management
and provisioning tool that provides a unified interface for hardware control, discovery, and
OS diskful/diskfree deployment.
=head1 XCAT DATABASE
All of the cluster configuration information is in the xCAT database. See L<xcatdb(5)|xcatdb.5> for
descriptions of every table in the database.
=head1 XCAT COMMANDS
What follows is a short description of each xCAT command. To get more information about a particular
command, see its man page. Note that the commands are listed in alphabetical order B<within each section>,
i.e. all the commands in section 1, then the commands in section 3, etc.
=over 12
EOS1
# extract the summary for each cmd from its man page
foreach my $manpage (@_) {
my ($sectionnum) = $manpage =~ /\.(\d+)\.pod$/;
# Suck in the whole file, then we will parse it.
open(MANPAGE, "$manpage") or die "Error: could not open $manpage for reading.\n";
my @contents = <MANPAGE>;
my $wholemanpage = join('', @contents);
close(MANPAGE);
# This regex matches: optional space, =head1, space, title, space, cmd, space, description, newline
my ($cmd, $description) = $wholemanpage =~ /^\s*=head1\s+\S+\s+(\S+)\s+(.+?)\n/si;
if (!defined($cmd)) { print "Warning: $manpage is not in a recognized structure. It will be ignored.\n"; next; }
if (!defined($description)) { print "Warning: $manpage does not have a description for $cmd. It will be ignored.\n"; next; }
$cmd =~ s/^.<(.+)>$/$1/; # if the cmd name has pod formatting around it, strip it off
$description =~ s/^-\s*//; # if the description has a leading hypen, strip it off
print FILE "\n=item L<$cmd($sectionnum)|$cmd.$sectionnum>\n\n" . $description . "\n";
}
# Artificially add the xcattest cmd, because the xCAT-test rpm will add this
print FILE "\n=item L<xcattest(1)|xcattest.1>\n\nRun automated xCAT test cases.\n";
print FILE <<"EOS3";
=back
EOS3
close FILE;
}
# Create the html page for one pod.
sub convertpod2html {
my ($podfile, $htmlfile, $poddir, $htmldir) = @_;


@@ -9,7 +9,6 @@
use strict;
#use lib '.';
use Pod::Man;
use Pod::Html;
@@ -21,13 +20,6 @@ if (system("mkdir -p $cachedir")) { die "Error: could not create $cachedir.\n";
my @pods = getPodList($poddir);
#foreach (@pods) { print "$_\n"; } exit;
# Build the cmd overview page.
#writesummarypage("$poddir/man1/xcat.1.pod", @pods);
# Build the man page for each pod.
#mkdir($mandir) or die "Error: could not create $mandir.\n";
print "Converting PODs to man pages...\n";
foreach my $podfile (@pods) {
my $manfile = $podfile;
@@ -42,8 +34,6 @@ foreach my $podfile (@pods) {
my @dummyPods = createDummyPods($poddir, \@pods);
# Build the html page for each pod.
#mkdir($htmldir) or die "Error: could not create $htmldir.\n";
print "Converting PODs to HTML pages...\n";
# have to clear the cache, because old entries can cause a problem
@@ -56,7 +46,6 @@ foreach my $podfile (@pods) {
$hdir =~ s|/[^/]*$||; # get rid of the basename part
if (system("mkdir -p $hdir")) { die "Error: could not create $hdir.\n"; }
#print "$podfile, $htmlfile, $poddir, $htmldir\n";
convertpod2html($podfile, $htmlfile, $poddir, $htmldir);
}
@@ -75,16 +64,12 @@ sub createDummyPods {
my ($poddir, $pods) = @_;
my $cmd = "grep -r -E 'L<.+\\([57]\\)\\|.+\\.[57]>' " . $poddir;
#print "Running cmd: ", $cmd, "\n";
my @lines = `$cmd`;
if ($?) { print "Did not find any section 5 man page, creating dummy pods...\n"; print join('', @lines); }
#my @lines;
#system($cmd);
my @dummyPods;
foreach my $l (@lines) {
#print "$l\n";
my @matches = $l =~ /L<([^\(]+)\(([57])\)\|\1\.[57]>/g; # get all the matches in the line
# The above line should create the array with every other entry being the man page name
# and every other entry is the section # (5 or 7)
@@ -134,72 +119,6 @@ sub getPodList {
return sort @files;
}
# Create the xcat man page that gives a summary description of each xcat cmd.
# Not used.
sub writesummarypage {
my $file = shift; # relative path file name of the man page
# the rest of @_ contains the pod files that describe each cmd
open(FILE, ">$file") or die "Error: could not open $file for writing.\n";
print FILE <<'EOS1';
=head1 NAME
B<xcat> - extreme Cluster Administration Tool.
=head1 DESCRIPTION
Extreme Cluster Administration Toolkit (xCAT). xCAT is a scalable distributed computing management
and provisioning tool that provides a unified interface for hardware control, discovery, and
OS diskful/diskfree deployment.
=head1 XCAT DATABASE
All of the cluster configuration information is in the xCAT database. See L<xcatdb(5)|xcatdb.5> for
descriptions of every table in the database.
=head1 XCAT COMMANDS
What follows is a short description of each xCAT command. To get more information about a particular
command, see its man page. Note that the commands are listed in alphabetical order B<within each section>,
i.e. all the commands in section 1, then the commands in section 3, etc.
=over 12
EOS1
# extract the summary for each cmd from its man page
foreach my $manpage (@_) {
my ($sectionnum) = $manpage =~ /\.(\d+)\.pod$/;
# Suck in the whole file, then we will parse it.
open(MANPAGE, "$manpage") or die "Error: could not open $manpage for reading.\n";
my @contents = <MANPAGE>;
my $wholemanpage = join('', @contents);
close(MANPAGE);
# This regex matches: optional space, =head1, space, title, space, cmd, space, description, newline
my ($cmd, $description) = $wholemanpage =~ /^\s*=head1\s+\S+\s+(\S+)\s+(.+?)\n/si;
if (!defined($cmd)) { print "Warning: $manpage is not in a recognized structure. It will be ignored.\n"; next; }
if (!defined($description)) { print "Warning: $manpage does not have a description for $cmd. It will be ignored.\n"; next; }
$cmd =~ s/^.<(.+)>$/$1/; # if the cmd name has pod formatting around it, strip it off
$description =~ s/^-\s*//; # if the description has a leading hypen, strip it off
print FILE "\n=item L<$cmd($sectionnum)|$cmd.$sectionnum>\n\n" . $description . "\n";
}
# Artificially add the xcattest cmd, because the xCAT-test rpm will add this
print FILE "\n=item L<xcattest(1)|xcattest.1>\n\nRun automated xCAT test cases.\n";
print FILE <<"EOS3";
=back
EOS3
close FILE;
}
# Create the html page for one pod.
sub convertpod2html {
my ($podfile, $htmlfile, $poddir, $htmldir) = @_;


@@ -9,5 +9,4 @@ Package: xcat-ui
Architecture: all
Depends: ${perl:Depends}, python (>= 2.3), apache2, libapache2-mod-php
Description: Provides a browser-based interface for xCAT
Provides a browser-based interface for xCAT (extreme Cluster Administration
Tool).
Provides a browser-based interface for xCAT.


@@ -72,7 +72,7 @@ function login() {
<div id="login-form">
<table>
<tr><td colspan=5></td></tr>
<tr><td align=right><img src="images/logo.png" width="50" height="35"></img></td><td colspan=4 style="font-size: 18px;">eXtreme Cloud Administration Toolkit</td></tr>
<tr><td align=right><img src="images/logo.png" width="50" height="35"></img></td><td colspan=4 style="font-size: 18px;">xCAT</td></tr>
<tr><td colspan=5></td></tr>
<tr><td></td><td><label for=username>User name:</label></td><td colspan=2><input type=text name=username></td><td></td></tr>
<tr><td></td><td><label for=password>Password:</label></td><td colspan=2><input type=password name=password></td><td></td></tr>


@@ -57,7 +57,7 @@ if (!isAuthenticated()) {
<div id="login-form">
<table>
<tr><td colspan=5></td></tr>
<tr><td align=right><img src="images/logo.png" width="50" height="35"></img></td><td colspan=4 style="font-size: 18px;">eXtreme Cloud Administration Toolkit</td></tr>
<tr><td align=right><img src="images/logo.png" width="50" height="35"></img></td><td colspan=4 style="font-size: 18px;">xCAT</td></tr>
<tr><td colspan=5></td></tr>
<tr><td></td><td><label for=username>User name:</label></td><td colspan=2><input type=text name=username></td><td></td></tr>
<tr><td></td><td><label for=password>Password:</label></td><td colspan=2><input type=password name=password></td><td></td></tr>


@@ -31,7 +31,7 @@ Requires: httpd
%endif
%description
Provides a browser-based interface for xCAT (Extreme Cloud Administration Toolkit).
Provides a browser-based interface for xCAT.
%prep
%setup -q -n xCAT-UI


@@ -199,9 +199,10 @@ sub nmap_pping {
@nmap_options = xCAT::TableUtils->get_site_attribute("nmapoptions");
$more_options = $nmap_options[0];
}
open(FPING, "nmap -PE --system-dns --unprivileged --send-ip -sP $more_options " . join(' ', @$nodes) . " 2> /dev/null|") or die("Cannot open nmap pipe: $!");
# Added port 22 for unprivileged case (#4324)
open(NMAP, "nmap -PE --system-dns --send-ip -sP --unprivileged -PA80,443,22 $more_options " . join(' ', @$nodes) . " 2> /dev/null|") or die("Cannot open nmap pipe: $!");
my $node;
while (<FPING>) {
while (<NMAP>) {
if (/Host (.*) \(.*\) appears to be up/) {
$node = $1;
unless ($deadnodes{$node}) {
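The host-up matching used above can be sketched in Python (the sample output line is illustrative; newer nmap releases print "Nmap scan report for ..." instead of this legacy format):

```python
import re

# Matches the legacy "nmap -sP" report line the Perl regex looks for.
UP_RE = re.compile(r'Host (.*) \(.*\) appears to be up')

def up_hosts(nmap_output):
    """Return the hostnames nmap reported as up."""
    return [m.group(1) for line in nmap_output.splitlines()
            if (m := UP_RE.search(line))]

print(up_hosts('Host node1 (10.0.0.1) appears to be up.\nirrelevant line'))
# → ['node1']
```

Hosts that never produce a matching line are treated as not responding.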


@@ -12,8 +12,8 @@ B<chzone> [B<-h> | B<-v>]
=head1 B<DESCRIPTION>
The B<chzone> command is designed to change the definition of a zone previous defined in the cluster.
The chzone command is only supported on Linux ( No AIX support).
The nodes are not updated with the new root ssh keys by chzone. You must run updatenode -k or xdsh -K to the nodes to update the root ssh keys to the new generated zone keys. This will also sync any service nodes with the zone keys, if you have a hierarchical cluster.
The B<chzone> command is only supported on Linux ( No AIX support).
The nodes are not updated with the new root ssh keys by B<chzone>. You must run B<updatenode -k> or B<xdsh -K> to the nodes to update the root ssh keys to the new generated zone keys. This will also sync any service nodes with the zone keys, if you have a hierarchical cluster.
Note: if there are any zones in the zone table, there must be one and only one defaultzone. Otherwise, errors will occur.
=head1 B<OPTIONS>
@@ -30,12 +30,12 @@ Displays command version and build date.
=item B<-k | --sshkeypath> I<full path to the ssh RSA private key>
This is the path to the id_rsa key that will be used to build new root's ssh keys for the zone. If -k is used, it will generate the ssh public key from the input ssh RSA private key, and store both in /etc/xcat/sshkeys/<zonename>/.ssh directory.
This is the path to the id_rsa key that will be used to build new root's ssh keys for the zone. If B<-k> is used, it will generate the ssh public key from the input ssh RSA private key, and store both in /etc/xcat/sshkeys/<zonename>/.ssh directory.
=item B<-K | --genkeys>
Using this flag, will generate new ssh RSA private and public keys for the zone into the /etc/xcat/sshkeys/<zonename>/.ssh directory.
The nodes are not automatically updated with the new root ssh keys by chzone. You must run updatenode -k or xdsh -K to the nodes to update the root ssh keys to the new generated zone keys. This will also sync any service nodes with the zone keys, if you have a hierarchical cluster.
The nodes are not automatically updated with the new root ssh keys by chzone. You must run B<updatenode -k> or B<xdsh -K> to the nodes to update the root ssh keys to the new generated zone keys. This will also sync any service nodes with the zone keys, if you have a hierarchical cluster.
=item B<--defaultzone>
@@ -48,20 +48,20 @@ Note: if any zones in the zone table, there must be one and only one defaultzone
=item B<-a | --addnoderange> I<noderange>
For each node in the noderange, it will set the zonename attribute for that node to the input zonename.
If the -g flag is also on the command, then
If the B<-g> flag is also on the command, then
it will add the group name "zonename" to each node in the noderange.
=item B<-r | --rmnoderange> I<noderange>
For each node in the noderange, if the node is a member of the input zone, it will remove the zonename attribute for that node.
If any of the nodes in the noderange is not a member of the zone, you will get an error and nothing will be changed.
If the -g flag is also on the command, then
If the B<-g> flag is also on the command, then
it will remove the group name "zonename" from each node in the noderange.
=item B<-s| --sshbetweennodes> B<yes|no>
If -s entered, the zone sshbetweennodes attribute will be set to yes or no based on the input. When this is set to yes, then ssh will be setup to allow passwordless root access between nodes. If no, then root will be prompted for a password when running ssh between the nodes in the zone.
If B<-s> is entered, the zone sshbetweennodes attribute will be set to yes or no based on the input. When this is set to yes, ssh will be set up to allow passwordless root access between nodes. If no, root will be prompted for a password when running ssh between the nodes in the zone.

=item B<-f | --force>
@@ -108,7 +108,6 @@ To add a new group of nodes (compute3) to zone3 and add zone3 group to the nodes
chzone zone3 -a compute3 -g
=item 5.
To remove a group of nodes (compute4) from zone4 and remove zone4 group from the nodes, enter:
chzone> zone4 -r compute4 -g
@@ -130,4 +129,4 @@ Location of the chzone command.
=head1 B<SEE ALSO>
L <mkzone(1)|mkzone.1>,L <rmzone(1)|rmzone.1>,L <xdsh(1)|xdsh.1>, L<updatenode(1)|updatenode.1>
L<mkzone(1)|mkzone.1>, L<rmzone(1)|rmzone.1>, L<xdsh(1)|xdsh.1>, L<updatenode(1)|updatenode.1>

View File

@@ -116,7 +116,6 @@ To setup the ODBC for DB2 xcatdb database access, on the SN :
db2sqlsetup -o -C
=item 5.
To setup the DB2 database but not start xcat running with it:
db2sqlsetup -i -S -N

View File

@@ -67,7 +67,6 @@ Output is similar to:
......
=item 2.
To display the tree of service node hierarchy for service node "mysn01".
lstree -s mysn01
@@ -80,7 +79,6 @@ Output is similar to:
|__mycn03
=item 3.
To display the tree of hardware hierarchy for all the nodes.
lstree -H

View File

@@ -176,7 +176,6 @@ Initialize diskless node "clstrn29" using the xCAT osimage called "61dskls". Al
mkdsklsnode -i 61dskls clstrn29 psize=128 sparse_paging=yes
=item 4.
Initialize an xCAT node called "node02" as an AIX diskless node. Create a new NIM machine definition name with the osimage as an extension to the xCAT node name.
mkdsklsnode -n -i 61spot node02

View File

@@ -82,13 +82,11 @@ To make a new zone2 using defaults and make it the default zone enter:
mkzone> zone2 --defaultzone -f
=item 3.
To make a new zone2A using the ssh id_rsa private key in /root/.ssh:
mkzone zone2A -k /root/.ssh
=item 4.
To make a new zone3 and assign the noderange compute3 to the zone enter:
mkzone zone3 -a compute3
@@ -99,7 +97,6 @@ To make a new zone4 and assign the noderange compute4 to the zone and add zone4
mkzone zone4 -a compute4 -g
=item 6.
To make a new zone5 and assign the noderange compute5 to the zone and add zone5 as a group to each node but not allow passwordless ssh between the nodes enter:
mkzone zone5 -a compute5 -g -s no

View File

@@ -10,7 +10,7 @@ B<nodechmac> I<node-name> B<mac=>I<mac-address>
=head1 DESCRIPTION
The B<nodechmac> command changes the MAC address for provisioned nodes network interface.
The B<nodechmac> command changes the MAC address for a provisioned node's network interface.
You can use this command to keep an existing node configuration. For example, if an existing node has hardware problems, the replacement node can use the old configurations. By using the nodechmac command, the node name and network settings of the old node can be used by the new node.

View File

@@ -5,7 +5,7 @@ B<piflash> - Remotely applies firmware updates to servers.
=head1 SYNOPSIS
B<piflash> <noderange> --package <filename>
B<piflash> I<noderange> --package I<filename>
=head1 DESCRIPTION

View File

@@ -699,8 +699,7 @@ The output of the query operation:
...
=item 4
Query all the attirbutes for management module node MM1. (For chassis)
Query all the attributes for management module node MM1. (For chassis)
renergy MM1 all

View File

@@ -258,7 +258,6 @@ Output is similar to:
</SYSTEM>
=item 3.
To retrieve 'config' information from the HMC-managed LPAR node3, enter:
rinv node3 config
@@ -270,7 +269,6 @@ Output is similar to:
node5: Total Memory (MB): 1024
=item 4.
To retrieve information about a VMware node vm1, enter:
rinv vm1
@@ -287,7 +285,6 @@ Output is similar to:
B<zVM specific :>
=item 5.
To list the defined network names available for a given node:
rinv pokdev61 --getnetworknames
@@ -304,7 +301,6 @@ Output is similar to:
pokdev61: VSWITCH SYSTEM VSW3
=item 6.
To list the configuration for a given network:
rinv pokdev61 --getnetwork GLAN1
@@ -317,7 +313,6 @@ Output is similar to:
pokdev61: Isolation Status: OFF
=item 7.
To list the disk pool names available:
rinv pokdev61 --diskpoolnames
@@ -329,7 +324,6 @@ Output is similar to:
pokdev61: POOL3
=item 8.
List the configuration for a given disk pool:
rinv pokdev61 --diskpool POOL1 free
@@ -342,7 +336,6 @@ Output is similar to:
=item 9.
List the known zFCP pool names.
rinv pokdev61 --zfcppoolnames
@@ -354,7 +347,6 @@ Output is similar to:
pokdev61: zfcp3
=item 10.
List the SCSI/FCP devices contained in a given zFCP pool:
rinv pokdev61 --zfcppool zfcp1

View File

@@ -58,13 +58,11 @@ To remove zone1 from the zone table and the zonename attribute on all it's assig
rmzone zone1
=item 2.
To remove zone2 from the zone table, the zone2 zonename attribute, and the zone2 group assigned to all nodes that were in zone2, enter:
rmzone zone2 -g
=item 3.
To remove zone3 from the zone table, all the node zone attributes and override the fact it is the defaultzone, enter:
rmzone zone3 -g -f
@@ -79,4 +77,4 @@ Location of the rmzone command.
=head1 B<SEE ALSO>
L <mkzone(1)|mkzone.1>,L <chzone(1)|chzone.1>,L <xdsh(1)|xdsh.1>, L<updatenode(1)|updatenode.1>
L<mkzone(1)|mkzone.1>, L<chzone(1)|chzone.1>, L<xdsh(1)|xdsh.1>, L<updatenode(1)|updatenode.1>

View File

@@ -264,7 +264,7 @@ Get the BMC backup gateway ip address.
=item B<community>={B<public> | I<string>}
Get or set the SNMP commmunity value. The default is B<public>.
Get or set the SNMP community value. The default is B<public>.
=item B<date>=I<mm:dd:yyy>
@@ -348,7 +348,7 @@ Get or set hostname on the service processor.
=item B<vlan>
Get or set vlan ID. For get vlan ID, if vlan is not enabled, 'BMC VLAN disabled' will be outputed. For set vlan ID, the valid value are [1-4096].
Get or set vlan ID. When getting the vlan ID, if vlan is not enabled, 'BMC VLAN disabled' will be displayed. When setting the vlan ID, the valid values are [1-4096].
=item B<ipsrc>
@@ -855,7 +855,6 @@ To force service processor failover for cec01:
cec01: 192.168.2.2: sp=primary,ipadd=192.168.2.2,alt_ipadd=unavailable,state=LINE UP
=item 27.
To deconfigure memory bank 9 and 10 of Processing Unit 0 on mm01:
rspconfig mm01 memdecfg=deconfigure:bank:0:9,10
@@ -865,7 +864,6 @@ Output is similar to:
mm01: Success
=item 28.
To reset the network interface of the specified nodes:
rspconfig --resetnet

View File

@@ -135,7 +135,7 @@ xCAT ships some default configuration files
for Ethernet switches and IB switches under
I</opt/xcat/share/xcat/devicetype> directory. If you want to overwrite
any of the configuration files, copy them to I</var/opt/xcat/>
directory and cutomize.
directory and customize.
For example, I<base/IBSwitch/Qlogic/config> is the configuration
file location if devicetype is specified as IBSwitch::Qlogic.
xCAT will first search config file using I</var/opt/xcat/> as the base.
@@ -252,7 +252,6 @@ To execute B<sinv> on the Linux osimage defined for cn1. First build a template
sinv -c "xdsh -i /install/netboot/rhels6/ppc64/test_ramdisk_statelite/rootimg cat /etc/hosts" -e -t 1 -p /tmp/sinv.template -o /tmp/sinv.output
=item 8.
To execute B<sinv> on the AIX NIM 611dskls spot and compare /etc/hosts to compute1 node, run the following:
xdsh compute1 "cat /etc/hosts" | xdshcoll > /tmp/sinv2/template"
@@ -260,7 +259,6 @@ To execute B<sinv> on the AIX NIM 611dskls spot and compare /etc/hosts to comput
sinv -c "xdsh -i 611dskls cat /etc/hosts" -e -t1 -p /tmp/sinv.template -o /tmp/sinv.output
=item 9.
To execute B<sinv> on the device mswitch2 and compare to mswitch1
sinv -c "xdsh mswitch enable;show version" -s mswitch1 -p /tmp/sinv/template --devicetype IBSwitch::Mellanox -l admin -t 2
@@ -275,4 +273,4 @@ Location of the sinv command.
=head1 B<SEE ALSO>
L <xdsh(1)|xdsh.1>, L<noderange(3)|noderange.3>
L<xdsh(1)|xdsh.1>, L<noderange(3)|noderange.3>

View File

@@ -31,7 +31,7 @@ If the Management Node is target node, it must be defined in the xCAT database w
B<REMOTE> B<USER>:
A user_ID can be specified for the remote copy command. Remote user
specification is identical for the xdcp and xdsh commands. See the xdsh
specification is identical for the B<xdcp> and B<xdsh> commands. See the B<xdsh>
command for more information.
B<REMOTE> B<COMMAND> B<COPY>:
@@ -185,7 +185,7 @@ For example, your rsynclist file may look like this:
When you use the append script, the file (left) of the arrow is appended to the file right of the arrow. In this example, /etc/myappenddir/appendfile is appended to /etc/mysetup/setup file, which must already exist on the node. The /opt/xcat/share/xcat/scripts/xdcpappend.sh is used to accomplish this.
Another option is the B<MERGE:> clause in the synclist file. The B<MERGE:> clause is used to append the contents of the input file to /etc/passwd, /etc/group, or /etc/shadow on a Linux node. It is only supported for those files and only on Linux. You must not use both the APPEND and MERGE funcion for these three files. The processing could end up not creating the file you desire. The MERGE function is the preferred method, becuase APPEND only adds to the file. MERGE will add to the file but also insure there are no duplicate entries.
Another option is the B<MERGE:> clause in the synclist file. The B<MERGE:> clause is used to append the contents of the input file to /etc/passwd, /etc/group, or /etc/shadow on a Linux node. It is only supported for those files and only on Linux. You must not use both the APPEND and MERGE funcion for these three files. The processing could end up not creating the file you desire. The MERGE function is the preferred method, because APPEND only adds to the file. MERGE will add to the file but also insure there are no duplicate entries.
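A loose sketch of the property this paragraph describes: plain APPEND only concatenates, while MERGE also ensures no duplicate entries survive. This is not the actual xdcpmerge implementation, just the dedup behavior, with invented /etc/passwd-style lines:

```shell
# APPEND vs MERGE-like behavior on scratch files (contents invented).
base=$(mktemp); extra=$(mktemp)
printf 'root:x:0:0\nuser1:x:1001:100\n' > "$base"
printf 'user1:x:1001:100\nuser2:x:1002:100\n' > "$extra"
cat "$extra" >> "$base"   # APPEND behavior: user1 is now listed twice
sort -u "$base"           # MERGE-like result: duplicates collapsed
rm -f "$base" "$extra"
```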
For example, your rsynclist file may look like this:
/tmp/share/file2 -> /tmp/file2
@@ -469,7 +469,6 @@ Run:
xdcp compute -F /tmp/myrsync
=item 9.
To rsync all the files in /home/mikev to the compute nodes:
Create a rsync file /tmp/myrsync, with this line:

View File

@@ -70,8 +70,7 @@ The remote shell is determined as follows, in order of precedence:
4. The B</usr/bin/ssh> command.
The remote shell options are determined as follows, in order of prece-
dence:
The remote shell options are determined as follows, in order of precedence:
1. The B<-o> flag.
@@ -525,7 +524,6 @@ To cleanup the servicenode directory that stages the copy of files to the nodes,
xdsh servicenoderange -c
=item 9.
To define the QLogic IB switch as a node and to set up the SSH keys for IB switch
B<qswitch> with device configuration file
B</var/opt/xcat/IBSwitch/Qlogic/config> and user name B<username>, enter
@@ -548,7 +546,6 @@ B<mswitch> with and user name B<username>, enter
xdsh mswitch -l admin --devicetype IBSwitch::Mellanox 'enable;configure terminal;show ssh server host-keys'
=item 12.
To define a BNT Ethernet switch as a node and run a command to create a new vlan with vlan id 3 on the switch.
chdef myswitch groups=all
@@ -562,7 +559,6 @@ If it is for Telnet, add I<tn:> in front of the user name: I<tn:admin>.
dsh myswitch --devicetype EthSwitch::BNT 'enable;configure terminal;vlan 3;end;show vlan'
=item 13.
To run B<xdsh> with the non-root userid "user1" that has been setup as an xCAT userid and with sudo on node1 and node2 to run as root, do the following, see xCAT doc on Granting_Users_xCAT_privileges:
xdsh node1,node2 --sudo -l user1 "cat /etc/passwd"

View File

@@ -114,7 +114,6 @@ the format used in the Description, enter:
xdsh node1,node2,node3 cat /etc/passwd | xdshbak
=item 2.
To display the results of a command issued on several nodes with
identical output displayed only once, enter:

View File

@@ -45,7 +45,7 @@ use xCAT::Utils;
# Main
my $rc = 0;
&parse_args;
my $cmd = "lsdef -t group";
my $cmd = "lsdef -t group | awk '{print \$1}'";
my @grpoutput = xCAT::Utils->runcmd($cmd, 0);
if ($::RUNCMD_RC != 0)
{ # error
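The piped `awk '{print $1}'` above keeps only the first column of lsdef's group listing, dropping the trailing `(group)` marker. For example:

```shell
# Sample lsdef -t group output (invented group names), trimmed to names only.
printf 'compute  (group)\nservice  (group)\n' | awk '{print $1}'
```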

View File

@@ -9,7 +9,6 @@
use strict;
#use lib '.';
use Pod::Man;
use Pod::Html;
@@ -21,14 +20,10 @@ if (system("mkdir -p $cachedir")) { die "Error: could not create $cachedir.\n";
my @pods = getPodList($poddir);
#foreach (@pods) { print "$_\n"; } exit;
# Build the cmd overview page.
writesummarypage("$poddir/man1/xcat.1.pod", @pods);
push @pods, "$poddir/man1/xcat.1.pod";
# Build the man page for each pod.
#mkdir($mandir) or die "Error: could not create $mandir.\n";
print "Converting PODs to man pages...\n";
foreach my $podfile (@pods) {
my $manfile = $podfile;
@@ -43,8 +38,6 @@ foreach my $podfile (@pods) {
my @dummyPods = createDummyPods($poddir, \@pods);
# Build the html page for each pod.
#mkdir($htmldir) or die "Error: could not create $htmldir.\n";
print "Converting PODs to HTML pages...\n";
# have to clear the cache, because old entries can cause a problem
@@ -57,7 +50,6 @@ foreach my $podfile (@pods) {
$hdir =~ s|/[^/]*$||; # get rid of the basename part
if (system("mkdir -p $hdir")) { die "Error: could not create $hdir.\n"; }
#print "$podfile, $htmlfile, $poddir, $htmldir\n";
convertpod2html($podfile, $htmlfile, $poddir, $htmldir);
}
@@ -76,16 +68,12 @@ sub createDummyPods {
my ($poddir, $pods) = @_;
my $cmd = "grep -r -E 'L<.+\\([57]\\)\\|.+\\.[57]>' " . $poddir;
#print "Running cmd: ", $cmd, "\n";
my @lines = `$cmd`;
if ($?) { print "Did not find any section 5 man page, creating dummy pods...\n"; print join('', @lines); }
#my @lines;
#system($cmd);
my @dummyPods;
foreach my $l (@lines) {
#print "$l\n";
my @matches = $l =~ /L<([^\(]+)\(([57])\)\|\1\.[57]>/g; # get all the matches in the line
# The above line should create the array with every other entry being the man page name
# and every other entry is the section # (5 or 7)
@@ -149,11 +137,11 @@ sub writesummarypage {
print FILE <<'EOS1';
=head1 NAME
B<xcat> - extreme Cluster Administration Tool.
B<xcat> - Extreme Cloud Administration Toolkit.
=head1 DESCRIPTION
Extreme Cluster Administration Toolkit (xCAT). xCAT is a scalable distributed computing management
Extreme Cloud Administration Toolkit (xCAT). xCAT is a scalable distributed computing management
and provisioning tool that provides a unified interface for hardware control, discovery, and
OS diskful/diskfree deployment.

View File

@@ -9,7 +9,6 @@
use strict;
#use lib '.';
use Pod::Man;
use Pod::Html;
@@ -20,13 +19,6 @@ my $cachedir = '/tmp';
my @pods = getPodList($poddir);
#foreach (@pods) { print "$_\n"; } exit;
# Build the cmd overview page.
#writesummarypage("$poddir/man1/xcat.1.pod", @pods);
# Build the man page for each pod.
#mkdir($mandir) or die "Error: could not create $mandir.\n";
print "Converting PODs to man pages...\n";
foreach my $podfile (@pods) {
my $manfile = $podfile;
@@ -41,8 +33,6 @@ foreach my $podfile (@pods) {
my @dummyPods = createDummyPods($poddir, \@pods);
# Build the html page for each pod.
#mkdir($htmldir) or die "Error: could not create $htmldir.\n";
print "Converting PODs to HTML pages...\n";
# have to clear the cache, because old entries can cause a problem
@@ -55,7 +45,6 @@ foreach my $podfile (@pods) {
$hdir =~ s|/[^/]*$||; # get rid of the basename part
if (system("mkdir -p $hdir")) { die "Error: could not create $hdir.\n"; }
#print "$podfile, $htmlfile, $poddir, $htmldir\n";
convertpod2html($podfile, $htmlfile, $poddir, $htmldir);
}
@@ -74,27 +63,21 @@ sub createDummyPods {
my ($poddir, $pods) = @_;
my $cmd = "grep -r -E 'L<.+\\([57]\\)\\|.+\\.[57]>' " . $poddir;
#print "Running cmd: ", $cmd, "\n";
my @lines = `$cmd`;
if ($?) { print "Did not find any section 5 man page, creating dummy pods...\n"; print join('', @lines); }
#my @lines;
#system($cmd);
my @dummyPods;
foreach my $l (@lines) {
#print "$l\n";
my @matches = $l =~ /L<([^\(]+)\(([57])\)\|\1\.[57]>/g; # get all the matches in the line
# The above line should create the array with every other entry being the man page name
# and every other entry is the section # (5 or 7)
my $cmd;
while ($cmd = shift @matches) {
#foreach my $m (@matches) {
my $section = shift @matches;
my $filename = "$poddir/man$section/$cmd.$section.pod";
#print "$filename\n";
if (!(grep /^$filename$/, @$pods) && !(grep /^$filename$/, @dummyPods)) { push @dummyPods, $filename; }
}
}
@@ -133,71 +116,6 @@ sub getPodList {
return sort @files;
}
# Create the xcat man page that gives a summary description of each xcat cmd.
sub writesummarypage {
my $file = shift; # relative path file name of the man page
# the rest of @_ contains the pod files that describe each cmd
open(FILE, ">$file") or die "Error: could not open $file for writing.\n";
print FILE <<'EOS1';
=head1 NAME
B<xcat> - extreme Cluster Administration Tool.
=head1 DESCRIPTION
Extreme Cluster Administration Toolkit (xCAT). xCAT is a scalable distributed computing management
and provisioning tool that provides a unified interface for hardware control, discovery, and
OS diskful/diskfree deployment.
=head1 XCAT DATABASE
All of the cluster configuration information is in the xCAT database. See L<xcatdb(5)|xcatdb.5> for
descriptions of every table in the database.
=head1 XCAT COMMANDS
What follows is a short description of each xCAT command. To get more information about a particular
command, see its man page. Note that the commands are listed in alphabetical order B<within each section>,
i.e. all the commands in section 1, then the commands in section 3, etc.
=over 12
EOS1
# extract the summary for each cmd from its man page
foreach my $manpage (@_) {
my ($sectionnum) = $manpage =~ /\.(\d+)\.pod$/;
# Suck in the whole file, then we will parse it.
open(MANPAGE, "$manpage") or die "Error: could not open $manpage for reading.\n";
my @contents = <MANPAGE>;
my $wholemanpage = join('', @contents);
close(MANPAGE);
# This regex matches: optional space, =head1, space, title, space, cmd, space, description, newline
my ($cmd, $description) = $wholemanpage =~ /^\s*=head1\s+\S+\s+(\S+)\s+(.+?)\n/si;
if (!defined($cmd)) { print "Warning: $manpage is not in a recognized structure. It will be ignored.\n"; next; }
if (!defined($description)) { print "Warning: $manpage does not have a description for $cmd. It will be ignored.\n"; next; }
$cmd =~ s/^.<(.+)>$/$1/; # if the cmd name has pod formatting around it, strip it off
$description =~ s/^-\s*//; # if the description has a leading hypen, strip it off
print FILE "\n=item L<$cmd($sectionnum)|$cmd.$sectionnum>\n\n" . $description . "\n";
}
# Artificially add the xcattest cmd, because the xCAT-test rpm will add this
print FILE "\n=item L<xcattest(1)|xcattest.1>\n\nRun automated xCAT test cases.\n";
print FILE <<"EOS3";
=back
EOS3
close FILE;
}
# Create the html page for one pod.
sub convertpod2html {
my ($podfile, $htmlfile, $poddir, $htmldir) = @_;

View File

@@ -126,10 +126,15 @@ sub obtain_log_file_list {
my $self = shift;
my %candidate_log;
my @loglist = ("/var/log/messages",
"/var/log/xcat/cluster.log",
"/var/log/xcat/computes.log",
"/var/log/syslog");
my @candidate_log_set;
push @candidate_log_set, "/var/log/messages" if (-e "/var/log/messages");
push @candidate_log_set, "/var/log/xcat/cluster.log" if (-e "/var/log/xcat/cluster.log");
push @candidate_log_set, "/var/log/xcat/computes.log" if (-e "/var/log/xcat/computes.log");
foreach my $log (@loglist){
push @candidate_log_set, $log if (-e "$log");
}
my $filename;
foreach my $log (@candidate_log_set) {

View File

@@ -375,16 +375,18 @@ sub is_tftp_ready {
rename("/$test_dir/tftptestt.tmp", "/$test_dir/tftptestt.tmp.old") if (-e "/$test_dir/tftptestt.tmp");
rename("./tftptestt.tmp", "./tftptestt.tmp.old") if (-e "./tftptestt.tmp");
system("touch /$test_dir/tftptestt.tmp");
system("date > /$test_dir/tftptestt.tmp");
my $output = `tftp -4 -v $mnip -c get /tftptest/tftptestt.tmp 2>&1`;
if ((!$?) && (-e "./tftptestt.tmp")) {
if ((!$?) && (-s "./tftptestt.tmp")) {
unlink("./tftptestt.tmp");
rename("./tftptestt.tmp.old", "./tftptestt.tmp") if (-e "./tftptestt.tmp.old");
rename("/$test_dir/tftptestt.tmp.old", "/$test_dir/tftptestt.tmp") if (-e "/$test_dir/tftptestt.tmp.old");
system("rm -rf $test_dir");
return 1;
} else {
rename("./tftptestt.tmp.old", "./tftptestt.tmp") if (-e "./tftptestt.tmp.old");
rename("/$test_dir/tftptestt.tmp.old", "/$test_dir/tftptestt.tmp") if (-e "/$test_dir/tftptestt.tmp.old");
system("rm -rf $test_dir");
return 0;
}
}
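The hunk above writes `date` output into the probe file and tests it with `-s` instead of `-e`, so a zero-byte download no longer counts as a working tftp server. A tiny demonstration of the difference between the two file tests:

```shell
# -e only checks existence; -s also requires the file to be non-empty.
tmp=$(mktemp)
: > "$tmp"                      # truncate: file exists but is empty
[ -e "$tmp" ] && echo "exists"
[ -s "$tmp" ] || echo "but empty"
date > "$tmp"                   # same trick as the patch: give the file content
[ -s "$tmp" ] && echo "non-empty"
rm -f "$tmp"
```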
@@ -512,6 +514,42 @@ sub parse_node_range {
#------------------------------------------
=head3
Description:
Test if chrony service is ready to use in current operating system
Arguments:
errormsg_ref: (output attribute) if there is something wrong for chrony service, this attribute save the possible reason.
Returns:
1 : yes
0 : no
=cut
#------------------------------------------
sub is_chrony_ready {
my $errormsg_ref = shift;
$errormsg_ref = shift if (($errormsg_ref) && ($errormsg_ref =~ /probe_utils/));
my $chronycoutput = `chronyc tracking 2>&1`;
if ($?) {
if ($chronycoutput =~ /Cannot talk to daemon/) {
$$errormsg_ref = "chronyd service is not running! Please set up ntp on the current node";
return 0;
}
$$errormsg_ref = "command 'chronyc tracking' failed, could not get status of ntp service";
return 0;
}
if ($chronycoutput =~ /Leap status : (.+)/) {
my $status = $1;
if ($status eq "Not synchronised") {
$$errormsg_ref = "chronyd did not synchronize.";
return 0;
}
}
return 1;
}
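The new `is_chrony_ready` checks `chronyc tracking` output for a daemon error or a 'Not synchronised' leap status. A hedged shell sketch of that parsing, fed canned strings rather than a live chronyd (the sample strings only approximate real chronyc output):

```shell
# Hypothetical parse of chronyc tracking output, mirroring is_chrony_ready.
check_leap_status() {
    out=$1
    case $out in
        *'Cannot talk to daemon'*) echo 'chronyd not running'; return 1 ;;
    esac
    status=$(printf '%s\n' "$out" | sed -n 's/^Leap status *: *//p')
    if [ "$status" = 'Not synchronised' ]; then
        echo 'not synchronised'; return 1
    fi
    echo ok
}
check_leap_status 'Leap status : Normal'
check_leap_status 'Leap status : Not synchronised' || true
```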
#------------------------------------------
=head3
Description:
Test if ntp service is ready to use in current operating system
@@ -525,7 +563,7 @@ sub parse_node_range {
#------------------------------------------
sub is_ntp_ready{
my $errormsg_ref = shift;
$errormsg_ref= shift if (($errormsg_ref) && ($errormsg_ref =~ /probe_utils/));
$errormsg_ref = shift if (($errormsg_ref) && ($errormsg_ref =~ /probe_utils/));
my $cmd = 'ntpq -c "rv 0"';
$| = 1;
@@ -659,6 +697,72 @@ sub convert_second_to_time {
#------------------------------------------
=head3
Description:
Call get_files_recursive to get all files under given dir,
and save to target file
Arguments:
dir: the dir want to get files
target_file: the file to save files list
=cut
#------------------------------------------
sub list_files_to_file {
my $src_dir = shift;
$src_dir = shift if (($src_dir) && ($src_dir =~ /probe_utils/));
my $target_file = shift;
my $errormsg_ref = shift;
my @files = ();
get_files_recursive("$src_dir", \@files);
my $all_file = join("\n", @files);
if (!open f,"> $target_file") {
$$errormsg_ref = "Cannot open file $target_file to save files list";
return 1;
}
print f $all_file;
close f;
return 0;
}
#------------------------------------------
=head3
Description:
Get all files under the given dir
Arguments:
dir: the dir want to get files
files_path_ref: list of all files
=cut
#------------------------------------------
sub get_files_recursive {
my $dir = shift;
my $files_path_ref = shift;
my $fd = undef;
opendir($fd, $dir);
for (; ;)
{
my $direntry = readdir($fd);
last unless (defined($direntry));
next if ($direntry =~ m/^\.\w*/);
next if ($direntry eq '..');
my $target = "$dir/$direntry";
if (-d $target) {
get_files_recursive($target, $files_path_ref);
} else {
push(@{$files_path_ref}, glob("$target\n"));
}
}
closedir($fd);
}
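A rough shell analogue of the new `get_files_recursive`/`list_files_to_file` helpers: walk a directory tree and save every regular file path to a list file. The scratch paths below are made up for the sketch:

```shell
# Recursively list regular files under a scratch tree, like the new helpers do.
src=$(mktemp -d)
mkdir -p "$src/sub"
echo a > "$src/f1"
echo b > "$src/sub/f2"
find "$src" -type f | sort > "$src.list"
wc -l < "$src.list"
rm -rf "$src" "$src.list"
```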
#------------------------------------------
=head3
Description:
print table

View File

@@ -87,8 +87,9 @@ sub do_main_job {
$rc |= $rst;
#check important directory
$rst = check_directory(\%sitetable, \$checkpoint, \@error);
print_check_result($checkpoint, "f", $rst, \@error);
($rst, $flag) = check_directory(\%sitetable, \$checkpoint, \@error);
print_check_result($checkpoint, $flag, $rst, \@error);
$rst = 0 if ($flag == "w");
$rc |= $rst;
#check if SElinux is disabled
@@ -376,19 +377,28 @@ sub check_site_table {
$$checkpoint_ref = "Checking 'site' table is configured...";
@$error_ref = ();
my @attr_list = ("master", "domain", "installdir", "tftpdir");
my @attr_list = ("master", "domain", "installdir", "tftpdir", "sharedtftp", "installloc");
foreach my $attr (@attr_list) {
my $value;
$value = `lsdef -t site -i $attr -c 2>&1 | awk -F'=' '{print \$2}'`;
chomp($value);
if ($value eq "") {
push @$error_ref, "There isn't '$attr' definition in 'site' table";
$rst = 1;
} else {
if (($attr eq "master") and (!probe_utils->is_ip_addr("$value"))) {
push @$error_ref, "The value of 'master' in 'site' table isn't an IP addres";
if ($attr ne "sharedtftp" and $attr ne "installloc") {
push @$error_ref, "There isn't a '$attr' definition in 'site' table";
$rst = 1;
}
} else {
if (($attr eq "master") and (!xCAT::NetworkUtils->isIpaddr("$value"))) {
my $masterip = xCAT::NetworkUtils->getipaddr("$value");
if (! defined $masterip){
push @$error_ref, "The value of 'master' in 'site' table isn't an IP address and cannot be resolved to an IP address";
$rst = 1;
}else{
probe_utils->send_msg("$output", "i", "The value of 'master' in 'site' table is $value, which resolves to $masterip") if($verbose);
$value = $masterip;
}
}
$sitetable_ref->{$attr} = $value;
}
}
@@ -523,6 +533,7 @@ sub check_directory {
my $checkpoint_ref = shift;
my $error_ref = shift;
my $rst = 0;
my $flag = "f";
my @dir_list = ("installdir", "tftpdir");
my $dir_list_str = join(",",@dir_list);
@@ -539,29 +550,67 @@ sub check_directory {
$rst = 1;
} else {
if ($is_sn) {
my $mountoutput = `mount 2>&1| grep '$sitetable_ref->{$dir}'`;
chomp($mountoutput);
if (($dir eq "tftpdir" and $sitetable_ref->{sharedtftp} eq "1") or
($dir eq "installdir" and $sitetable_ref->{installloc})) {
my $mountoutput = `mount 2>&1| grep '$sitetable_ref->{$dir}'`;
chomp($mountoutput);
my $mountip;
if ($mountoutput =~ /(.+):$sitetable_ref->{$dir} on $sitetable_ref->{$dir} /) {
my $mountsource = $1;
if (xCAT::NetworkUtils->isIpaddr($mountsource)) {
$mountip = $mountsource;
} else {
$mountip = xCAT::NetworkUtils->getipaddr($mountsource);
my $mountip;
if ($mountoutput =~ /(.+):$sitetable_ref->{$dir} on $sitetable_ref->{$dir} /) {
my $mountsource = $1;
if (xCAT::NetworkUtils->isIpaddr($mountsource)) {
$mountip = $mountsource;
} else {
$mountip = xCAT::NetworkUtils->getipaddr($mountsource);
}
}
}
if ($mountip ne $sitetable_ref->{master}) {
push @$error_ref, "$dir '$sitetable_ref->{$dir}' isn't mounted from the management node,please check SN's configuration";
$rst = 1;
if ($mountip ne $sitetable_ref->{master}) {
push @$error_ref, "$dir '$sitetable_ref->{$dir}' isn't mounted from the management node, please check the SN's configuration";
$rst = 1;
}
} elsif (($dir eq "tftpdir" and $sitetable_ref->{sharedtftp} ne "1") or
($dir eq "installdir" and !$sitetable_ref->{installloc})) {
my $tmpdir = "/tmp/tmp_$dir";
mkdir($tmpdir);
my $mountrst = `mount $sitetable_ref->{master}:$sitetable_ref->{$dir} $tmpdir 2>&1`;
if ($?) {
push @$error_ref, "mount $sitetable_ref->{master}:$sitetable_ref->{$dir} $tmpdir failed";
$flag = "w" if (!$rst);
$rst = 1;
} else {
my $dir_list_file = "/tmp/$dir" . "_list";
my $mnt_list_file = $tmpdir . "_list";
my $error;
if (probe_utils->list_files_to_file($sitetable_ref->{$dir}, $dir_list_file, \$error)) {
push @$error_ref, "$error";
$flag = "w" if (!$rst);
$rst = 1;
} elsif (probe_utils->list_files_to_file($tmpdir, $mnt_list_file, \$error)) {
push @$error_ref, "$error";
$flag = "w" if (!$rst);
$rst = 1;
} else {
`sed -i "s|^$tmpdir|$sitetable_ref->{$dir}|g" $mnt_list_file`;
my $diffout = `diff $dir_list_file $mnt_list_file`;
if ($diffout) {
push @$error_ref, "$dir '$sitetable_ref->{$dir}' does not match the management node's copy, this may be an error";
$flag = "w" if (!$rst);
$rst = 1;
}
}
`umount $tmpdir`;
unlink($dir_list_file) if (-e "$dir_list_file");
unlink($mnt_list_file) if (-e "$mnt_list_file");
rmdir($tmpdir);
}
}
}
}
}
}
return $rst;
return ($rst, $flag);
}
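The new service-node branch above mounts the MN directory, lists both trees, rewrites the mount-point prefix, and diffs the listings. A simulated version of that comparison, with two scratch directories standing in for the local and mounted copies:

```shell
# Simulated consistency check: two scratch dirs play the local dir and the mount.
a=$(mktemp -d); b=$(mktemp -d)
echo x > "$a/f1"; echo x > "$b/f1"
find "$a" -type f | sed "s|^$a|DIR|" | sort > /tmp/a.list
find "$b" -type f | sed "s|^$b|DIR|" | sort > /tmp/b.list
if diff /tmp/a.list /tmp/b.list >/dev/null; then echo same; else echo differ; fi
rm -rf "$a" "$b" /tmp/a.list /tmp/b.list
```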
@@ -848,13 +897,22 @@ sub check_ntp_service{
$$checkpoint_ref = "Checking NTP service is configured...";
@$error_ref = ();
`which ntpq > /dev/null 2>&1`;
`which chronyc > /dev/null 2>&1`;
if($?){
push @$error_ref, "ntp service check need 'ntpq' tool, please install 'ntpq' toll and try again";
$rst = 1;
}else{
`which ntpq > /dev/null 2>&1`;
if ($?) {
push @$error_ref, "ntp service check needs the 'ntpq' or 'chronyc' tool, please install one and try again";
$rst = 1;
} else {
my $error;
if(!probe_utils->is_ntp_ready(\$error)){
push @$error_ref, "$error";
$rst = 1;
}
}
} else{
my $error;
if(!probe_utils->is_ntp_ready(\$error)){
if(!probe_utils->is_chrony_ready(\$error)){
push @$error_ref, "$error";
$rst = 1;
}
@@ -1152,7 +1210,7 @@ sub get_attribute_value {
if ($command_info =~ /$attr=(\d+)/) {
$cmd_value = $1
}
print_check_result("Checking $table table attribute... $attr=$cmd_value", 0, 0, \@error);
probe_utils->send_msg("$output", "i", "Checking $table table attribute... $attr=$cmd_value") if($verbose);
return $cmd_value
}
@@ -1281,7 +1339,7 @@ while ($hierarchy_instance->read_reply(\%reply_cache)) {
#print ">>>$reply_cache{$servers}->[$_]<<<\n";
#For cases like below:
#c910f02c04p04: [ok] :All xCAT daemons are running
if ($reply_cache{$servers}->[$_] =~ /^(\w+)\s*:\s*(\[\w+\]\s*):\s*(.*)/) {
if ($reply_cache{$servers}->[$_] =~ /^(\S+)\s*:\s*(\[\w+\]\s*):\s*(.*)/) {
if ("$1" eq "$server") {
$logmsg = "$2: $3";
$msg = "$2:[$server]: $3";
@@ -1289,7 +1347,7 @@ while ($hierarchy_instance->read_reply(\%reply_cache)) {
#For cases like below:
#sn02: ssh: connect to host sn02 port 22: No route to host
} elsif ($reply_cache{$servers}->[$_] =~ /^(\w+)\s*:\s*(ssh:.+)/){
} elsif ($reply_cache{$servers}->[$_] =~ /^(\S+)\s*:\s*(ssh:.+)/){
if("$1" eq "$server") {
$logmsg = "$2";
$msg = "[failed] :[$server]: $2";
@@ -1297,7 +1355,7 @@ while ($hierarchy_instance->read_reply(\%reply_cache)) {
#For cases like below:
#c910f02c04p05: IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
} elsif ($reply_cache{$servers}->[$_] =~ /^(\w+)\s*:\s*(.*)/) {
} elsif ($reply_cache{$servers}->[$_] =~ /^(\S+)\s*:\s*(.*)/) {
if ("$1" eq "$server") {
$logmsg = "$2";
$msg = "[$server]: $2";
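The `\w+` to `\S+` change in the hunks above matters for hostnames containing `-` or `.`: `\w` only covers `[A-Za-z0-9_]`, so reply lines from such nodes never matched the parser. A small sketch of the difference, using POSIX character classes in `grep -E` and an illustrative node name:

```shell
# Hypothetical reply line; 'sn-02.cluster' is a made-up node name.
line='sn-02.cluster: [ok] :All xCAT daemons are running'

# Old pattern ^(\w+)\s*: -- [[:alnum:]_] is the POSIX spelling of \w
echo "$line" | grep -qE '^[[:alnum:]_]+[[:space:]]*:' \
    && echo 'matched by \w+' || echo 'missed by \w+'     # prints: missed by \w+

# New pattern ^(\S+)\s*: -- any run of non-whitespace
echo "$line" | grep -qE '^[^[:space:]]+[[:space:]]*:' \
    && echo 'matched by \S+' || echo 'missed by \S+'     # prints: matched by \S+
```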

View File

@@ -658,7 +658,7 @@ sub process_request_nmap {
if ($ip6 and scalar(@ip6s)) {
open($fping, "nmap --unprivileged -6 -PS$ports,3001 -n --send-ip -p $ports,3001 $more_options " . join(' ', @ip6s) . " 2> /dev/null|") or die("Can't start nmap: $!");
} elsif (not $ip6 and scalar(@ips)) {
open($fping, "nmap --unprivileged -PE -n --send-ip -p $ports,3001 $more_options " . join(' ', @ips) . " 2> /dev/null|") or die("Can't start nmap: $!");
open($fping, "nmap --unprivileged -PA80,443,22 -PE -n --send-ip -p $ports,3001 $more_options " . join(' ', @ips) . " 2> /dev/null|") or die("Can't start nmap: $!");
} else { next; }
while (<$fping>) {
if (/Interesting ports on ([^ ]*)[: ]/ or /Nmap scan report for ([^ ]*)/) {
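The added `-PA80,443,22` flags send TCP ACK ping probes to ports 80, 443 and 22 alongside the ICMP echo ping (`-PE`), which helps discover nodes whose firewalls drop ICMP. A sketch of the resulting invocation — the ports and addresses below are illustrative, not values from the code:

```shell
ports="22,443"                 # hypothetical xCAT service ports
ips="10.0.0.1 10.0.0.2"        # hypothetical discovery range
cmd="nmap --unprivileged -PA80,443,22 -PE -n --send-ip -p $ports,3001 $ips"
echo "$cmd"                    # inspect only; actually running it requires nmap
```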


@@ -426,21 +426,19 @@ sub parseprescripts
my $scripts = shift;
my $action = shift;
my $ret;
if ($scripts) {
if ($scripts =~ /:/) {
my @a = split(/\|/, $scripts);
foreach my $token (@a) {
#print "token=$token, action=$action\n";
if ($scripts) {
foreach my $token (split(/\|/, $scripts)) {
if ($token =~ /:/) {
if ($token =~ /^$action:(.*)/) {
$ret = $1;
last;
$ret .= "$1,";
}
} else {
$ret .= "$token,";
}
} else {
$ret = $scripts;
}
}
return $ret;
}
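The rework above changes parseprescripts from returning only the first `action:script` match to accumulating every script for the given action, with tokens lacking an `action:` prefix now applying to all actions. A minimal POSIX-shell sketch of that semantics (hypothetical helper name, not the xCAT Perl):

```shell
parse_prescripts() {
    action=$1 scripts=$2 ret=''
    old_ifs=$IFS; IFS='|'
    for token in $scripts; do
        case $token in
            "$action":*) ret="$ret${token#"$action":}," ;;  # script for our action
            *:*)         : ;;                               # another action: skip
            *)           ret="$ret$token," ;;               # unprefixed: always runs
        esac
    done
    IFS=$old_ifs
    printf '%s\n' "$ret"    # trailing comma kept, mirroring the Perl's $ret .= "$1,"
}

parse_prescripts diskful 'diskful:setup1|standalone:other|common'   # → setup1,common,
```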


@@ -201,7 +201,7 @@ elif [ -r /rootimg-statelite.gz ]; then
MAXTRIES=5
ITER=0
if [ -z $MNTOPTS ]; then
MNT_OPTIONS="nolock,rsize=32768,tcp,nfsvers=3,timeo=14"
MNT_OPTIONS="nolock,rsize=32768,tcp,timeo=14"
else
MNT_OPTIONS=$MNTOPTS
fi
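Dropping `nfsvers=3` from the default options lets mount(8) negotiate the NFS protocol version with the server instead of pinning v3; a user-supplied `$MNTOPTS` still replaces the defaults outright. A sketch of the option selection, with a hypothetical server path in the final echo:

```shell
if [ -z "$MNTOPTS" ]; then
    # No explicit options: defaults without nfsvers=3, so the client
    # negotiates the highest NFS version the server supports.
    MNT_OPTIONS="nolock,rsize=32768,tcp,timeo=14"
else
    MNT_OPTIONS=$MNTOPTS          # caller-provided options win outright
fi
echo "mount -o $MNT_OPTIONS \$NFSSERVER:/install /rootimg"   # illustrative target
```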


@@ -53,7 +53,7 @@ if [ ! -z $SNAPSHOTSERVER ]; then
MAXTRIES=5
ITER=0
if [ -z $MNTOPTS ]; then
MNT_OPTIONS="nolock,rsize=32768,tcp,nfsvers=3,timeo=14"
MNT_OPTIONS="nolock,rsize=32768,tcp,timeo=14"
else
MNT_OPTIONS=$MNTOPTS
fi


@@ -202,7 +202,7 @@ elif [ -r /rootimg-statelite.gz ]; then
MAXTRIES=5
ITER=0
if [ -z $MNTOPTS ]; then
MNT_OPTIONS="nolock,rsize=32768,tcp,nfsvers=3,timeo=14"
MNT_OPTIONS="nolock,rsize=32768,tcp,timeo=14"
else
MNT_OPTIONS=$MNTOPTS
fi


@@ -63,8 +63,8 @@ function usage()
${script} --xcat-core=http://xcat.org/path/to/xcat-core.tar.bz2 \\
--xcat-dep=http://xcat.org/path/to/xcat-dep.tar.bz2 install
xCAT (Extreme Cloud/Cluster Administration Toolkit): <http://xcat.org/>
Full documentation at: <http://xcat-docs.readthedocs.io/en/stable/>
xCAT: http://xcat.org
Full documentation at: http://xcat-docs.readthedocs.io/en/stable
EOF
}


@@ -265,14 +265,6 @@ prsync_h
prsync_v
prsync_dir_node
prsync_file_node
switchdiscover_range_default
switchdiscover_h
switchdiscover_range_s
switchdiscover_range_default_w
switchdiscover_range_r
switchdiscover_range_x
switchdiscover_range_z
switchdiscover_range_z_V
makentp_v
makentp_h
nodeset_check_warninginfo


@@ -252,14 +252,6 @@ xdsh_t
xdsh_q
xdsh_T
xdsh_o
switchdiscover_range_default
switchdiscover_h
switchdiscover_range_s
switchdiscover_range_default_w
switchdiscover_range_r
switchdiscover_range_x
switchdiscover_range_z
switchdiscover_range_z_V
makentp_v
makentp_h
nodeset_check_warninginfo


@@ -187,14 +187,6 @@ xdsh_t
xdsh_q
xdsh_T
xdsh_o
switchdiscover_range_default
switchdiscover_h
switchdiscover_range_s
switchdiscover_range_default_w
switchdiscover_range_r
switchdiscover_range_x
switchdiscover_range_z
switchdiscover_range_z_V
nodeset_check_warninginfo
reg_linux_diskless_installation_flat
reg_linux_statelite_installation_flat


@@ -261,14 +261,6 @@ prsync_h
prsync_v
prsync_dir_node
prsync_file_node
switchdiscover_range_default
switchdiscover_h
switchdiscover_range_s
switchdiscover_range_default_w
switchdiscover_range_r
switchdiscover_range_x
switchdiscover_range_z
switchdiscover_range_z_V
makentp_v
makentp_h
nodeset_check_warninginfo


@@ -196,14 +196,6 @@ prsync_h
prsync_v
prsync_dir_node
prsync_file_node
switchdiscover_range_default
switchdiscover_h
switchdiscover_range_s
switchdiscover_range_default_w
switchdiscover_range_r
switchdiscover_range_x
switchdiscover_range_z
switchdiscover_range_z_V
nodeset_shell
nodeset_cmdline
nodeset_runimg


@@ -268,14 +268,6 @@ prsync_h
prsync_v
prsync_dir_node
prsync_file_node
switchdiscover_range_default
switchdiscover_h
switchdiscover_range_s
switchdiscover_range_default_w
switchdiscover_range_r
switchdiscover_range_x
switchdiscover_range_z
switchdiscover_range_z_V
makentp_v
makentp_h
nodeset_check_warninginfo


@@ -199,14 +199,6 @@ prsync_h
prsync_v
prsync_dir_node
prsync_file_node
switchdiscover_range_default
switchdiscover_h
switchdiscover_range_s
switchdiscover_range_default_w
switchdiscover_range_r
switchdiscover_range_x
switchdiscover_range_z
switchdiscover_range_z_V
nodeset_check_warninginfo
xcatconfig_u_check_xcatsslversion_rhels_sles
xcatconfig_c


@@ -262,14 +262,6 @@ prsync_h
prsync_v
prsync_dir_node
prsync_file_node
switchdiscover_range_default
switchdiscover_h
switchdiscover_range_s
switchdiscover_range_default_w
switchdiscover_range_r
switchdiscover_range_x
switchdiscover_range_z
switchdiscover_range_z_V
nodeset_shell
nodeset_cmdline
nodeset_runimg


@@ -255,14 +255,6 @@ prsync_h
prsync_v
prsync_dir_node
prsync_file_node
switchdiscover_range_default
switchdiscover_h
switchdiscover_range_s
switchdiscover_range_default_w
switchdiscover_range_r
switchdiscover_range_x
switchdiscover_range_z
switchdiscover_range_z_V
nodeset_shell
nodeset_cmdline
nodeset_runimg


@@ -196,14 +196,6 @@ prsync_h
prsync_v
prsync_dir_node
prsync_file_node
switchdiscover_range_default
switchdiscover_h
switchdiscover_range_s
switchdiscover_range_default_w
switchdiscover_range_r
switchdiscover_range_x
switchdiscover_range_z
switchdiscover_range_z_V
nodeset_check_warninginfo
nodeset_shell
nodeset_cmdline


@@ -267,14 +267,6 @@ prsync_h
prsync_v
prsync_dir_node
prsync_file_node
switchdiscover_range_default
switchdiscover_h
switchdiscover_range_s
switchdiscover_range_default_w
switchdiscover_range_r
switchdiscover_range_x
switchdiscover_range_z
switchdiscover_range_z_V
makentp_v
makentp_h
nodeset_check_warninginfo


@@ -261,14 +261,6 @@ prsync_h
prsync_v
prsync_dir_node
prsync_file_node
switchdiscover_range_default
switchdiscover_h
switchdiscover_range_s
switchdiscover_range_default_w
switchdiscover_range_r
switchdiscover_range_x
switchdiscover_range_z
switchdiscover_range_z_V
nodeset_shell
nodeset_cmdline
nodeset_runimg


@@ -215,14 +215,6 @@ prsync_h
prsync_v
prsync_dir_node
prsync_file_node
switchdiscover_range_default
switchdiscover_h
switchdiscover_range_s
switchdiscover_range_default_w
switchdiscover_range_r
switchdiscover_range_x
switchdiscover_range_z
switchdiscover_range_z_V
nodeset_check_warninginfo
nodeset_shell
nodeset_cmdline


@@ -311,14 +311,6 @@ prsync_h
prsync_v
prsync_dir_node
prsync_file_node
switchdiscover_range_default
switchdiscover_h
switchdiscover_range_s
switchdiscover_range_default_w
switchdiscover_range_r
switchdiscover_range_x
switchdiscover_range_z
switchdiscover_range_z_V
makentp_v
makentp_h
nodeset_check_warninginfo


@@ -306,14 +306,6 @@ prsync_h
prsync_v
prsync_dir_node
prsync_file_node
switchdiscover_range_default
switchdiscover_h
switchdiscover_range_s
switchdiscover_range_default_w
switchdiscover_range_r
switchdiscover_range_x
switchdiscover_range_z
switchdiscover_range_z_V
makentp_v
makentp_h
nodeset_check_warninginfo


@@ -267,14 +267,6 @@ prsync_h
prsync_v
prsync_dir_node
prsync_file_node
switchdiscover_range_default
switchdiscover_h
switchdiscover_range_s
switchdiscover_range_default_w
switchdiscover_range_r
switchdiscover_range_x
switchdiscover_range_z
switchdiscover_range_z_V
nodeset_check_warninginfo
xcatconfig_u_check_xcatsslversion_rhels_sles
xcatconfig_c


@@ -250,14 +250,6 @@ prsync_h
prsync_v
prsync_dir_node
prsync_file_node
switchdiscover_range_default
switchdiscover_h
switchdiscover_range_s
switchdiscover_range_default_w
switchdiscover_range_r
switchdiscover_range_x
switchdiscover_range_z
switchdiscover_range_z_V
makentp_v
makentp_h
nodeset_check_warninginfo


@@ -204,14 +204,6 @@ prsync_h
prsync_v
prsync_dir_node
prsync_file_node
switchdiscover_range_default
switchdiscover_h
switchdiscover_range_s
switchdiscover_range_default_w
switchdiscover_range_r
switchdiscover_range_x
switchdiscover_range_z
switchdiscover_range_z_V
nodeset_check_warninginfo
xcatd_start
xcatd_stop


@@ -204,14 +204,6 @@ prsync_h
prsync_v
prsync_dir_node
prsync_file_node
switchdiscover_range_default
switchdiscover_h
switchdiscover_range_s
switchdiscover_range_default_w
switchdiscover_range_r
switchdiscover_range_x
switchdiscover_range_z
switchdiscover_range_z_V
nodeset_shell
nodeset_cmdline
nodeset_runimg


@@ -200,14 +200,6 @@ prsync_h
prsync_v
prsync_dir_node
prsync_file_node
switchdiscover_range_default
switchdiscover_h
switchdiscover_range_s
switchdiscover_range_default_w
switchdiscover_range_r
switchdiscover_range_x
switchdiscover_range_z
switchdiscover_range_z_V
nodeset_shell
nodeset_cmdline
nodeset_runimg


@@ -219,14 +219,6 @@ prsync_h
prsync_v
prsync_dir_node
prsync_file_node
switchdiscover_range_default
switchdiscover_h
switchdiscover_range_s
switchdiscover_range_default_w
switchdiscover_range_r
switchdiscover_range_x
switchdiscover_range_z
switchdiscover_range_z_V
xcatd_start
xcatd_stop
xcatd_restart


@@ -216,14 +216,6 @@ prsync_h
prsync_v
prsync_dir_node
prsync_file_node
switchdiscover_range_default
switchdiscover_h
switchdiscover_range_s
switchdiscover_range_default_w
switchdiscover_range_r
switchdiscover_range_x
switchdiscover_range_z
switchdiscover_range_z_V
nodeset_check_warninginfo
xcatd_start
xcatd_stop


@@ -187,14 +187,6 @@ xdsh_t
xdsh_q
xdsh_T
xdsh_o
switchdiscover_range_default
switchdiscover_h
switchdiscover_range_s
switchdiscover_range_default_w
switchdiscover_range_r
switchdiscover_range_x
switchdiscover_range_z
switchdiscover_range_z_V
nodeset_shell
nodeset_cmdline
nodeset_runimg


@@ -191,14 +191,6 @@ xdsh_t
xdsh_q
xdsh_T
xdsh_o
switchdiscover_range_default
switchdiscover_h
switchdiscover_range_s
switchdiscover_range_default_w
switchdiscover_range_r
switchdiscover_range_x
switchdiscover_range_z
switchdiscover_range_z_V
nodeset_shell
nodeset_cmdline
nodeset_runimg


@@ -182,14 +182,6 @@ xdsh_t
xdsh_q
xdsh_T
xdsh_o
switchdiscover_range_default
switchdiscover_h
switchdiscover_range_s
switchdiscover_range_default_w
switchdiscover_range_r
switchdiscover_range_x
switchdiscover_range_z
switchdiscover_range_z_V
nodeset_shell
nodeset_cmdline
nodeset_runimg

Some files were not shown because too many files have changed in this diff.