File ctdb-sysconfig-suse.template of Package ctdb (Project home:Alexander_Naumov:SLE12)
### Options to ctdbd. This is read by /etc/init.d/ctdb

## Path: Network/Ctdb
## Description: Ctdb location of the shared lock file
## Type: string
## Default: ""
# you must specify the location of a shared lock file across all the
# nodes. This must be on shared storage
# there is no default
CTDB_RECOVERY_LOCK=""

## Description: Ctdb public network interface
## Type: string
## Default: ""
# when doing IP takeover you may also specify what network interface
# to use by default for the public addresses. Otherwise you must
# specify an interface on each line of the public addresses file
# there is no default
CTDB_PUBLIC_INTERFACE=eth0

## Description: Location of the file with the public IP addresses
## Type: string
## Default: /etc/ctdb/public_addresses
# Should ctdb do IP takeover? If it should, then specify a file
# containing the list of public IP addresses that ctdb will manage
# Note that these IPs must be different from those in $NODES above
# there is no default.
# The syntax is one line per public address of the form:
# <ipaddress>/<netmask> <interface>
# Example: 10.1.1.1/24 eth0
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses

## Description: Use single LVS public IP?
## Type: ip
## Default: ""
# Should CTDB present the cluster using a single public ip address to clients
# and multiplex clients across all CONNECTED nodes?
# This is based on LVS
# When this is enabled, the entire cluster will present one single ip address
# which clients will connect to.
CTDB_LVS_PUBLIC_IP=

## Description: Manage Samba?
## Type: yesno
## Default: yes
# should ctdb manage starting/stopping the Samba service for you?
# default is to not manage Samba
CTDB_MANAGES_SAMBA=yes

## Description: Skip Samba share checks?
# If there are very many shares it may not be feasible to check that all
# of them are available during each monitoring interval.
# In that case this check can be disabled
## Type: yesno
## Default: yes
CTDB_SAMBA_SKIP_SHARE_CHECK=yes

## Description: Skip NFS share checks?
# If there are very many shares it may not be feasible to check that all
# of them are available during each monitoring interval.
# In that case this check can be disabled
## Type: yesno
## Default: yes
CTDB_NFS_SKIP_SHARE_CHECK=yes

## Description: Samba check ports?
## Type: integer
## Default:
# specify which ports we should check that there is a daemon listening to
# by default we use testparm and look in smb.conf to figure it out.
# CTDB_SAMBA_CHECK_PORTS="445"

### Must we remove the leading ^# and which default to set when we're happy
### with ctdb's default?

## Description: Manage winbind?
## Type: yesno
## Default: yes
# should ctdb manage starting/stopping the Winbind service?
# if left commented out then it will be autodetected based on smb.conf
CTDB_MANAGES_WINBIND=yes

## Description: Manage vsftpd?
## Type: yesno
## Default: yes
# should ctdb manage starting/stopping the VSFTPD service
CTDB_MANAGES_VSFTPD=yes

## Description: Manage iSCSI?
## Type: yesno
## Default: yes
# should ctdb manage starting/stopping the ISCSI service
CTDB_MANAGES_ISCSI=yes

## Description: Manage NFS?
## Type: yesno
## Default: yes
# should ctdb manage starting/stopping the NFS service
CTDB_MANAGES_NFS=yes

## Description: Manage Apache?
## Type: yesno
## Default: yes
# should ctdb manage starting/stopping the Apache web server httpd?
CTDB_MANAGES_HTTPD=yes

## Description: Init script style
## Type: string
## Default: ""
# The init style (redhat/suse/ubuntu...) is usually auto-detected.
# The names of init scripts of services managed by CTDB are set
# based on the detected init style. You can override the init style
# auto-detection here to explicitly use a scheme. This might be
# useful when you have installed packages (for instance samba
# packages) with a different init script layout.
# There is no default.
CTDB_INIT_STYLE=

## Description: Samba smb services init script
# The following is a smb specific Samba init script / service that you
# can override from auto-detection.
## Type: string
## Default: smb
CTDB_SERVICE_SMB=smb

## Description: Samba nmb services init script
# The following is a nmb specific Samba init script / service that you
# can override from auto-detection.
## Type: string
## Default: nmb
CTDB_SERVICE_NMB=nmb

## Description: Samba winbind services init script
# The following is a winbind specific Samba init script / service that you
# can override from auto-detection.
## Type: string
## Default: winbind
CTDB_SERVICE_WINBIND=winbind

# you may wish to raise the file descriptor limit for ctdb
# use a ulimit command here. ctdb needs one file descriptor per
# connected client (i.e. one per connected client in Samba)
# ulimit -n 10000

## Description: This file enumerates all nodes of the cluster
## Type: string
## Default: /etc/ctdb/nodes
# the NODES file must be specified or ctdb won't start
# it should contain a list of IPs that ctdb will use
# it must be exactly the same on all cluster nodes
# defaults to /etc/ctdb/nodes
CTDB_NODES=/etc/ctdb/nodes

## Description: Script used to notify about node health changes
## Type: string
## Default: /etc/ctdb/notify.sh
# a script to run when node health changes
CTDB_NOTIFY_SCRIPT=/etc/ctdb/notify.sh

## Description: Database location
## Type: string
## Default: /var/lib/ctdb
# the directory to put the local ctdb database files in
# defaults to /var/lib/ctdb
CTDB_DBDIR=/var/lib/ctdb

## Description: Persistent database location
## Type: string
## Default: /var/lib/ctdb/persistent
# the directory to put the local persistent ctdb database files in
# defaults to /var/lib/ctdb/persistent
CTDB_DBDIR_PERSISTENT=/var/lib/ctdb/persistent

## Description: Event script directory location
## Type: string
## Default: /etc/ctdb/events.d
# the directory where service specific event scripts are stored
# defaults to /etc/ctdb/events.d
CTDB_EVENT_SCRIPT_DIR=/etc/ctdb/events.d

## Description: Socket location
## Type: string
## Default: /var/lib/ctdb/ctdb.socket
# the location of the local ctdb socket
# defaults to /var/lib/ctdb/ctdb.socket
CTDB_SOCKET=/var/lib/ctdb/ctdb.socket

## Description: Type of transport
## Type: string
## Default: tcp
# what transport to use. Only tcp is currently supported
# defaults to tcp
CTDB_TRANSPORT="tcp"

## Description: Minimal amount of free memory
## Type: integer
## Default: 100
# When set, this variable makes ctdb monitor the amount of free memory
# in the system (the second number in the buffers/cache output from free -m).
# If the amount of free memory drops below this threshold the node will become
# unhealthy and ctdb and all managed services will be shut down.
# Once this occurs, the administrator needs to find the reason for the OOM
# situation, rectify it and restart ctdb with "service ctdb start"
# The unit is MByte
CTDB_MONITOR_FREE_MEMORY=100

## Description: Start ctdb disabled?
## Type: yesno
## Default: yes
# When set to yes, the CTDB node will start in DISABLED mode and not host
# any public ip addresses. The administrator needs to explicitly enable
# the node with "ctdb enable"
CTDB_START_AS_DISABLED="yes"

## Description: RECMASTER capability.
# By default all nodes are capable of both being LMASTER for records and
# also of taking the RECMASTER role and performing recovery.
# These parameters can be used to disable these two roles on a node.
# Note: If there are NO available nodes left in a cluster that can perform
# the RECMASTER role, the cluster will not be able to recover from a failure
# and will remain in RECOVERY mode until a RECMASTER capable node becomes
# available. Same for LMASTER.
# These parameters are useful for scenarios where you have one "remote" node
# in a cluster and you do not want the remote node to be fully participating
# in the cluster and slow things down.
# For that case, set both roles to "no" for the remote node on the remote site
# but leave the roles at the default of "yes" on the primary nodes in the
# central datacentre.
## Type: yesno
## Default: yes
CTDB_CAPABILITY_RECMASTER=yes

## Description: LMASTER capability.
# By default all nodes are capable of both being LMASTER for records and
# also of taking the RECMASTER role and performing recovery.
# These parameters can be used to disable these two roles on a node.
# Note: If there are NO available nodes left in a cluster that can perform
# the RECMASTER role, the cluster will not be able to recover from a failure
# and will remain in RECOVERY mode until a RECMASTER capable node becomes
# available. Same for LMASTER.
# These parameters are useful for scenarios where you have one "remote" node
# in a cluster and you do not want the remote node to be fully participating
# in the cluster and slow things down.
# For that case, set both roles to "no" for the remote node on the remote site
# but leave the roles at the default of "yes" on the primary nodes in the
# central datacentre.
## Type: yesno
## Default: yes
CTDB_CAPABILITY_LMASTER=yes

# NAT-GW configuration
# Some services running on the CTDB node may need to originate traffic to
# remote servers before the node is assigned any IP addresses.
# This is problematic since before the node has public addresses the node might
# not be able to route traffic to the public networks.
# One solution is to have static public addresses assigned with routing
# in addition to the public address interfaces, thus guaranteeing that
# a node can always route traffic to the external network.
# This is the simplest solution but it uses up a large number of
# additional ip addresses.
#
# A more complex solution is NAT-GW.
# In this mode we only need one additional ip address for the cluster from
# the external public network.
# One of the nodes in the cluster is elected to host this ip address
# so it can reach the external services. This node is also configured
# to use NAT MASQUERADING for all traffic from the internal private network
# to the external network. This node is the NAT-GW node.
#
# All other nodes are set up with a default route with a metric of 10 pointing
# to the nat-gw node.
#
# The effect of this is that only when a node does not have a public address,
# and thus no proper routes to the external world, will it
# route all packets through the nat-gw node.
#

## Description: NAT gateway public IP
## Type: ip
## Default: ""
NATGW_PUBLIC_IP=

## Description: NAT gateway public interface
## Type: string
## Default: ""
NATGW_PUBLIC_IFACE=

## Description: NAT gateway default gateway
## Type: ip
## Default: ""
NATGW_DEFAULT_GATEWAY=

## Description: NAT gateway private interface
## Type: string
## Default: ""
NATGW_PRIVATE_IFACE=

## Description: NAT gateway network
## Type: ip
## Default: ""
NATGW_PRIVATE_NETWORK=

## Description: NAT gateway nodes
## Type: string
## Default: /etc/ctdb/natgw_nodes
# NATGW_NODES is the list of nodes that belong to this natgw group.
# You can have multiple natgw groups in one cluster but each node
# can only belong to one single natgw group.
NATGW_NODES=/etc/ctdb/natgw_nodes

## Description: Ctdb log file location
## Type: string
## Default: /var/log/ctdb/log.ctdb
# where to log messages
# the default is /var/log/ctdb/log.ctdb
CTDB_LOGFILE=/var/log/ctdb/log.ctdb

## Description: Ctdb debug level
## Type: integer(0:10)
## Default: 2
# what debug level to run at. Higher means more verbose
# the default is 2
CTDB_DEBUGLEVEL=2

## Description: Ctdb any other option
## Type: string
## Default: ""
# any other options you might want. Run ctdbd --help for a list
CTDB_OPTIONS=
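For illustration only, here is a minimal sketch of the companion files this template points at (CTDB_NODES, CTDB_PUBLIC_ADDRESSES and NATGW_NODES). The addresses, netmasks and interface names below are placeholders, not values shipped with the package; the public_addresses syntax follows the <ipaddress>/<netmask> <interface> form described in the comments above, and the nodes/natgw_nodes files are assumed to be plain one-address-per-line lists.

# /etc/ctdb/nodes -- one private (cluster) IP per line, identical on every node
10.0.0.1
10.0.0.2
10.0.0.3

# /etc/ctdb/public_addresses -- <ipaddress>/<netmask> <interface>
192.168.1.101/24 eth0
192.168.1.102/24 eth0

# /etc/ctdb/natgw_nodes -- private IPs of the nodes in this NAT-GW group (assumed same format as the nodes file)
10.0.0.1
10.0.0.2
10.0.0.3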
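And a short hedged example of the start/enable cycle the comments refer to ("service ctdb start", "ctdb enable"); treat it as a sketch, since exact behaviour and output depend on the CTDB version and on CTDB_START_AS_DISABLED.

# start the daemon; with CTDB_START_AS_DISABLED="yes" the node comes up DISABLED
service ctdb start
# check node health and which nodes currently host public addresses
ctdb status
# allow this node to start hosting public addresses again
ctdb enable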