Get-View to speed up things using regex

I was playing around with the get-view cmdlet and found something I didn’t know yet.

On the site there is this example:

Get-View -ViewType VirtualMachine -Filter @{"Name" = "VM"}

Cool, let's test that. Hey, why do I get 2 items returned?
I searched for a VM called "Test" and it returned 2 VMs whose names start with "Test".

I didn't know this behavior; in the description of the Filter parameter you see:

Specifies a hash of <name>-<value> pairs, where <name> represents the property value to test, and <value> represents a regex pattern the property must match. If more than one pair is present, all the patterns must match.

Ah…

So let’s throw some regex in then:

One way to tighten our patterns is to define a pattern that describes both the start and the end of the line using the special ^ (caret) and $ (dollar sign) metacharacters. Maybe not the best option, but it works fine.

To do an exact match on "Test" we use: "^Test$".

Now let's do an exact match on Test and reset the VM:

(get-view -ViewType VirtualMachine -Filter @{"name"="^Test$"}).ResetVM()

Cool, that works. I'm no regex expert, but the approach above works because the whole line consists of nothing but the VM name.

It would be nicer to use a word boundary so you capture the word only and not the line.

So we can also use \bServername\b:


get-view -ViewType VirtualMachine -Filter @{"name"="\bTest\b"}

Both options work and may come in handy in the future.
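And since the description says all patterns must match, you can also combine multiple properties in one filter. A minimal sketch (the property names come from the VirtualMachine view; the VM name is just an example):

# Only VMs whose name starts with "Test" AND that are powered on
Get-View -ViewType VirtualMachine -Filter @{"Name" = "^Test"; "Runtime.PowerState" = "poweredOn"}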

So what I learned: the Filter parameter can be 'regexed'!

Logging Function v2.0

In an earlier post I shared a simple logging function in PowerShell; a while ago I upgraded it with specific Error, Success and Info switches. This way the output gets simple colors on screen, so you can quickly see whether something went wrong or right, and the logging also becomes better searchable.

 

$Logfile = "D:\LogFile.txt"

Function LogWrite{
<#
.SYNOPSIS
This function is used to generate a log file
.DESCRIPTION
Creates output on screen depending on the given switch and writes it, with a severity prefix, to the logfile
.PARAMETER Logstring
The message to write to the logfile
.PARAMETER Error
Switch to identify an error message
.PARAMETER Success
Switch to identify a success message
.PARAMETER Info
Switch to identify an info message
.EXAMPLE
PS C:\> LogWrite -Error "This is an error"
.INPUTS
System.String, System.Management.Automation.SwitchParameter
.OUTPUTS
System.String
#>

[CmdletBinding()]
[OutputType([System.String])]
Param(
[string]$Logstring,
[switch]$Error,
[switch]$Success,
[switch]$Info

)
try {
if ($Error){
$logstring = (Get-Date).ToString() + " ERROR: " + $logstring
Write-Host -ForegroundColor red $logstring
}
elseif ($Success){
$logstring = (Get-Date).ToString() + " SUCCESS: " + $logstring
Write-Host -ForegroundColor green $logstring
}
elseif ($Info){
$logstring = (Get-Date).ToString() + " INFO: " + $logstring
Write-Host $logstring
}
else {
$logstring = (Get-Date).ToString() + " INFO: " + $logstring
Write-Host $logstring
}
Add-content $Logfile -value $logstring
}
catch {
throw
}
}

#Example

logwrite -success "Success creating user: $user"
logwrite -error "Error creating user $user"
logwrite -info "Success querying user: $user"
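Each call prints a colored line on screen and appends the same timestamped line (with its ERROR/SUCCESS/INFO prefix) to the logfile, so a quick sanity check could look like this:

# Show the last few entries of the logfile defined above
Get-Content $Logfile -Tail 3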

PowerCLI: Show HBA Path Status *Updated*

Because our storage team was doing some upgrades lately, they asked if I could check the storage path status. We have 2 FC HBAs per server, each connected to a separate FC switch. Storage normally upgrades one path, then we check everything is fine before they start upgrading the other path.

Got my inspiration from the internet, and credits go to these authors:

https://jfrmilner.wordpress.com/2011/08/27/checking-for-dead-paths-on-hbas-with-powercli/ – I used the output as a starting point for where I wanted to go.
http://practical-admin.com/blog/powercli-show-hba-path-status/ – perfectly clean use of the Get-View feature; the only nasty part was the end:

$result += "{0},{1},{2},{3},{4}" -f $view.Name.Split(".")[0], $hba, $active, $dead, $standby
}
}

ConvertFrom-Csv -Header "VMHost", "HBA", "Active", "Dead", "Standby" -InputObject $result | ft -AutoSize

I changed that part by creating a new object instead:

$row = "" | Select VMhost,HBA,Active,Dead
$row.VMhost = $view.Name.Split(".")[0]
$row.HBA = $hba
$row.Active = $active
$row.Dead = $dead
$result += $row
}
}
$result|ft -AutoSize

This object can then easily be exported to CSV, formatted as a table, or whatever you like (a quick export example follows after the summary). It also makes it easy to create a wrap-up like I did at the end:

$result |Measure-Object -Property Active,Dead -Sum|select property,sum|ft -AutoSize

Especially in a larger environment you don't have to keep scrolling to see where it failed; just look at the summary to see if you need to scroll up 🙂
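And as mentioned above, exporting the collected $result objects is a one-liner. A minimal sketch (the output path is just an example):

# Export the per-HBA path counts to a CSV file
$result | Export-Csv -Path "C:\Temp\hba-paths.csv" -NoTypeInformation -UseCulture

The full function then looks like this: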


cls
function Check-DeadPaths{
<#
    .SYNOPSIS
        This function checks and reports path status
    .DESCRIPTION
         This function checks and reports path status
    .PARAMETER Outputfile
        Specify a output file, exported in CSV format.
    .EXAMPLE
        PS C:\> Check-DeadPaths -Outputfile "C:\temp\output.csv"
    .INPUTS
        System.String
    .OUTPUTS
        System.Collections.Hashtable
#>
[CmdletBinding(
    SupportsShouldProcess = $true, ConfirmImpact = "Low")]
    [OutputType([Hashtable])]
	param(
    [Parameter(Mandatory = $false)]
    [string]$Outputfile
	)
    BEGIN {
        try {
        }
        catch {
            Throw
        }
    }
	PROCESS {
        try {
		    if ($PSCmdlet.ShouldProcess("VMHosts", "Check HBA path status")){
				$views = Get-View -ViewType "HostSystem" -Property Name,Config.StorageDevice
				$result = @()
				foreach ($view in $views | Sort-Object -Property Name) {
					Write-Host "Checking" $view.Name
					$view.Config.StorageDevice.ScsiTopology.Adapter|Where-Object{ $_.Adapter -like "*FibreChannelHba*" } | ForEach-Object{
						# reset the counters for this adapter
						$active = $standby = $dead = 0
						$key = $_.Adapter
						$wwn = $view.Config.StorageDevice.HostBusAdapter|Where-Object{$_.key -eq $key}
						$_.Target | ForEach-Object{
							$_.Lun | ForEach-Object{
								$id = $_.ScsiLun
								$multipathInfo = $view.Config.StorageDevice.MultipathInfo.Lun | ?{ $_.Lun -eq $id }
								$active = ([ARRAY]($multipathInfo.Path | ?{ $_.PathState -like "active" })).Count
								$standby = ([ARRAY]($multipathInfo.Path | ?{ $_.PathState -like "standby" })).Count
								$dead = ([ARRAY]($multipathInfo.Path | ?{ $_.PathState -like "dead" })).Count
							}
						}
						$row = "" | Select VMhost,HBA,PortWorldWideName,NodeWorldWideName,Active,Dead
						$row.VMhost = $view.Name.Split(".")[0]
						$row.HBA = $_.Adapter.Split("-")[2]
						$row.PortWorldWideName = "{0:X}" -f $wwn.PortWorldWideName
						$row.NodeWorldWideName = "{0:X}" -f $wwn.NodeWorldWideName
						$row.Active = $active
						$row.Dead = $dead
						$result += $row
					}
				}
            }
		}
        catch {
            [String]$returnMessage = "Failed to check HBA paths.`r`nScriptname: " + $_.InvocationInfo.ScriptName + "`r`nLine: " + $_.InvocationInfo.ScriptLineNumber + "`r`nError: " + $_.Exception.Message
        }
$result|ft -AutoSize
Write-Host "Total Wrap up:"
$result |Measure-Object -Property Active,Dead -Sum|select property,sum|ft -AutoSize
		if ($Outputfile){
			$result | Export-Csv $Outputfile -Confirm:$false -UseCulture -NoTypeInformation
		}
	} # end of PROCESS block
	END {
        try {
            return @{returnMessage = $returnMessage}
        }
        catch {
            Throw
        }
    }
}

vCenter 6 creating global roles with PowerCLI

While in the middle of a migration from a vCenter 5.1 environment to a vCenter 6.x environment, I wanted to use the global roles so I don't have to set them per vCenter anymore.

So how do I create those global roles?

Well, the important thing is to connect to your vCenter (Connect-VIServer) using the administrator@vsphere.local user (or your SSO user if you configured a different one).

Because you login with the SSO user you can create the global roles by just using the New-VIRole command.

Example:
With the function below I tried to create a simple function with parameters -From and -To to recreate the roles from vCenter1 on vCenter2.
I make use of the LogWrite function I posted earlier to print some messages on screen and to a text file.

Before:
– I expect you to be connected to both vCenters using the Connect-VIServer cmdlet (see the sketch below).
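A minimal usage sketch (the vCenter names are placeholders for your own environment):

# Connect to both the source and the destination vCenter with an SSO admin account
Connect-VIServer -Server vcenter1.example.local -Credential (Get-Credential)
Connect-VIServer -Server vcenter2.example.local -Credential (Get-Credential)

# Recreate all non-system roles from vCenter1 on vCenter2
Migrate-VIrole -From vcenter1.example.local -To vcenter2.example.local

The function itself: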

function Migrate-VIrole{
	<#
		.SYNOPSIS
			Migrates the vCenter roles from one vCenter to another
		.DESCRIPTION
			Recreates the non-system roles and their privileges from one vCenter on another
		.PARAMETER  From
			This is the vCenter to read the roles from
		.PARAMETER  To
			This is the vCenter to create the roles on
		.EXAMPLE
			PS C:\> Migrate-VIRole -From vCenter1 -To vCenter2
		.INPUTS
			System.String
		.OUTPUTS
			System.String
	#>
	[CmdletBinding()]
	[OutputType([System.String])]
	param(
		[Parameter(Position=1, Mandatory=$true)]
		[ValidateNotNull()]
		[System.String]
		$From,
		[Parameter(Position=2, Mandatory=$true)]
		[ValidateNotNull()]
		[System.String]
		$To
	)
	try{
	#Grabbing non-system roles from both vCenters into arrays
	$ArrRolesFrom = Get-VIRole -Server $From |?{$_.IsSystem -eq $False}
	$ArrRolesTo = Get-VIRole -Server $To |?{$_.IsSystem -eq $False}
	
	#Checking for existing roles
	foreach ($Role in $ArrRolesFrom){
		if($ArrRolesTo|where{$_.Name -like $role})
			{
		Logwrite -Error "$Role already exists on $To"
		logwrite -Info "Checking permissions for $role"
			[string[]]$PrivsRoleFrom = Get-VIPrivilege -Role (Get-VIRole -Name $Role -Server $From) |%{$_.id}
			[string[]]$PrivsRoleTo = Get-VIPrivilege -Role (Get-VIRole -Name $Role -Server $To) |%{$_.id}
				foreach ($Privilege in $PrivsRoleFrom){
					if ($PrivsRoleTo | where {$_ -Like $Privilege})
					{
					Logwrite -Error "$Privilege already exists on $role"
					}
					else
					{
						#Setting privileges
						Set-VIRole -Role (Get-VIRole -Name $Role -Server $To) -AddPrivilege (Get-VIPrivilege -Id $Privilege -Server $To)|Out-Null
						Logwrite -Success "Setting $privilege on $role"
					}
				}
			}
			else
			{
				#Creating new empty role
				New-VIrole -Name $Role -Server $To|Out-Null
				Logwrite -Success "Creating $Role on $To" 
				Logwrite -Info "Checking permissions for $role"
				[string[]]$PrivsRoleFrom = Get-VIPrivilege -Role (Get-VIRole -Name $Role -Server $From) |%{$_.id}
				[string[]]$PrivsRoleTo = Get-VIPrivilege -Role (Get-VIRole -Name $Role -Server $To) |%{$_.id}
				foreach ($Privilege in $PrivsRoleFrom)
				{
					if ($PrivsRoleTo|where {$_ -Like $Privilege})
					{
						Logwrite -Error "$Privilege already exists on $role"
					}
					else
					{
					#Setting privileges
					Set-VIRole -role (get-virole -Name $Role -Server $To) -AddPrivilege (get-viprivilege -id $Privilege -server $To)|Out-Null
					logwrite -success "Setting $privilege on $role"
					}
				}
			}
		}
	}
	catch 
	{
		throw
	}
}

AD authentication and Windows Passthrough VCSA appliance

While deploying a new vCenter 6.0 environment we planned a setup like this:

2 physical sites; on each site we create a vCenter appliance and a vCenter Platform Services Controller. Both PSCs are joined to the same (replication) domain.

Deployment worked perfectly; I logged in as the SSO user and configured the LDAP settings so we could log in with our domain accounts.

Wow, easy as hell. But then a colleague mentioned he couldn't log in when using the checkbox "use windows credentials". We got an error.

Problem:

You see a popup with the error:
Window session credentials cannot be used to log into this server. Enter a user name and password

Troubleshooting:

Well, I tried a lot of VMware KBs, like:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2050701

And also a few others but still with no real result.

Then I found out on a few different sites that you need to join the appliances to your Active Directory domain. That makes sense, so I searched for some guides; you can do it via the GUI, but then only for the PSCs, and this didn't solve the problem.

For testing I deployed an embedded VCSA and configured it the same way I did the external version. I joined the machine to the domain, tested it and, wow, that worked.

So somewhere in the communication flow between VC – PSC – AD something goes wrong. The AD connection should be good as it worked flawlessly with the embedded version (assumption), so the problem should be something between VC and PSC.

I remembered a note on a site saying to join ALL nodes of your vSphere environment to AD to make it work. But damn, why does the GUI only show the PSCs? It makes sense that only the PSCs are connected to the domain and handle the authentication.

A colleague then connected the dots and found a command to join AD from the command line. I had already seen it, but what if we also look on the vCenter server to see if we can join it too?

Wow the command is there.

/opt/likewise/bin/domainjoin-cli

So let’s try again :

  • Re-deploy 2 VC’s and 2 PSC
  • Login with SSH
  • Join PSC’s to domain, join VC’s to domain
  • /opt/likewise/bin/domainjoin-cli join <domain> <domain admin user>
  • Restart all servers
  • Login with SSH
  • Query the DC to see if the join was successful
    /opt/likewise/bin/domainjoin-cli query
  • Configure SSO
  • Test... BAM, it works!

Solution:

Join all nodes (vCenter servers & Platform Service Controllers) to the Active Directory Domain.

/opt/likewise/bin/domainjoin-cli join <domain> <domain admin user>

 

Bonus: for troubleshooting I checked a lot of log files; here is a good list of log file locations:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2110014

vSphere 6.0's certool forgets template.cfg when creating a CSR

I was trying to generate a CSR. We decided to put the PSC as a subordinate CA in our environment. There are already a few good posts on the net which explain how to do this, so I followed the steps and started editing the

/usr/lib/vmware-vmca/share/config/certool.cfg 

this should be the template which will be used by Certool to create a CSR.
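For reference, such a template roughly looks like the sketch below; the field names follow the default certool.cfg, the values are placeholders for your own organization:

Country = NL
Name = psc01.example.local
Organization = Example Corp
OrgUnit = IT
State = Some-State
Locality = Some-City
Hostname = psc01.example.local
Email = admin@example.local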

Let’s start certtool, like mentioned in the most internet posts, Choose 2 … Choose 1 create new cert, put it on a CSR checker fails. Strange let’s see…..huh company VMware, Location US. Strange this wasn’t in my template file, looks like it didn’t us it.

After a little search I found

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2129706

Hah, so this is a known issue; I thought I did something wrong. So I followed the steps in the document and created/copied the .cfg I had already made.

Used:

certool --initcsr --privkey=priv.key --pubkey=pub.key --csrfile=csr.csr --config=certool_acme.cfg

 

Well, this worked perfectly. But when letting our PKI sign the certificate, we couldn't properly import it afterwards. While troubleshooting this we noticed that our PKI administrator had used the wrong template, which turned the subordinate CA certificate we requested into an end-entity one.


After some mailing we received a properly signed certificate, this time a real subordinate CA one.

After following the original documentation guides we could easily install the certificate.

KDC has no support for encryption type (14)

This morning was like every morning: everything up and running, and I had a good cup of coffee. All VMware components worked fine.

After that I went to a meeting, had some talks with colleagues and came back to my workplace. Hmm, a colleague tells me he can't log in to vCenter anymore. We have 2 separate vCenters in 2 different datacenters, and somehow he couldn't log in to either of them. Strange, as they share very few components.

The first thing I thought of was authentication, so I logged in with the admin@vsphere.local user in the Web Client and checked the SSO settings. When testing the connection I got an error.

The settings in SSO for LDAP are:

  • Reuse session (this was our setting and didn’t work anymore)
  • Password (this works with my own user account so AD should be ok?)
  • Anonymous (Prohibited error)

So somehow, when I use the Password option the authentication to AD is fine, but with Reuse session I get an error.

Let's check the SSO logging:

C:\Program Files\VMware\Infrastructure\SSOServer\logs\imsTrace.log

The errors I found where like:

Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: KDC has no support for encryption type (14))]
Caused by: GSSException: No valid credentials provided (Mechanism level: KDC has no support for encryption type (14))

So let's ask the security team if something happened during the time I was in the meeting.

“Uh yes we upgraded the functional domain level from Windows2003 to Windows2008”

*To be sure, I rebooted the vCenters in the meantime.

Ah, so we might have a trigger which caused this issue. When searching the internet I found similar error messages:

http://visualplanet.org/blog/?p=20

http://www.winsysadminblog.com/2013/02/fixing-kdc-authentication-problems-when-upgrading-your-domain-and-forest-functional-level-from-2003-to-2008-r2/

I advised the security team to restart the domain controllers or restart the service. Because the impact of restarting the service was unknown, we decided to restart the domain controllers later that evening.

As a workaround I used the SSO "Password" option with a service account, so people could log in and the backup process wasn't disturbed.

The day after, the domain controllers were rebooted. I changed the authentication type back to "Reuse session" and tested the connection... BINGO! It worked again.

The problem was probably as described in those posts; there isn't much public information on this that I could find. Our security team even checked the impact with Microsoft beforehand, but this was never mentioned.

Well took me a few minutes of my day :S

 

HP DL380Gen9 installing on SD Card using UEFI

Intro

Recently we purchased a few HP DL380 GEN9 servers to test our installations for ESXi, Windows and Linux.

Normally we install our ESXi environment on an SD card, which worked fine on the G7 and Gen8 series.

Now the new Gen9 series boots with UEFI by default; I tried the normal install, but that failed.

Switching to "Legacy Boot" worked fine, but I still wanted to see if we could get it to work with UEFI, so I started browsing the internet and found the following posts which already refer to this issue.

The problem seems to be the PXE boot environment which isn’t UEFI Compatible.

Bootloader / PXE

The problem seems to be the bootloader: the "legacy bootloader" is configured by default on many PXE boot environments, so we need to prepare them for the future and support UEFI as well.


See the article below which explains how it works and what needs to be done on Windows/RedHat PXE boot environments.

http://h20564.www2.hpe.com/hpsc/doc/public/display?docId=c04565930

VMware says you cannot use it right now, according to the article below. (It will work when you install to and use local disks etc.; the only thing that isn't supported is network provisioning/PXE boot.)

https://pubs.vmware.com/vsphere-55/index.jsp?topic=%2Fcom.vmware.vsphere.install.doc%2FGUID-DEB8086A-306B-4239-BF76-E354679202FC.html

You cannot provision EFI hosts with Auto Deploy unless you switch the EFI system to BIOS compatibility mode.

Network boot of VMware ESXi or provisioning with VMware Auto Deploy requires the legacy BIOS firmware and is not available with UEFI.

This will probably be fixed in newer versions of the Auto Deploy environment.

SD Card problem?

There can also be problems when using the SD card in an HP DL380 Gen9, which can be solved in 2 ways:

1) iLO bug: according to this post there has been an iLO bug which causes the server not to detect the SD card.

http://v-strange.de/index.php/hardware/190-installing-vmware-esxi-on-sd-card-on-hp-proliant-gen9

The scope of servers which have this problem could be any HP ProLiant Gen8 or ProLiant Gen9 server with HP Integrated Lights-Out 4 (iLO 4) Firmware Version 1.51 through 2.02 and a Secure Digital (SD) card installed.

This can be solved by an iLO firmware update:

http://h20566.www2.hp.com/hpsc/doc/public/display?docId=c04555714&jumpid=em_alerts_us-us_Feb15_xbu_all_all_2311152_208935_proliantserversstorageoptionsandaccessories_recommended_006_0

If your firmware is up to date, move on to the next step.

2) According to this post you should disable USB 3.0; after a little searching I found the article below which describes how to turn it off.

Use this option to set the USB 3.0 Mode.

  1. From the System Utilities screen, select System Configuration > BIOS/Platform Configuration (RBSU) > System Options > USB Options > USB 3.0 Mode and press Enter.

  2. Select a setting and press Enter:

    • Auto (default)—USB 3.0-capable devices operate at USB 2.0 speeds in the pre-boot environment and during boot. When a USB 3.0 capable OS USB driver loads, USB 3.0 devices transition to USB 3.0 speeds. This mode is compatible with operating systems that do not support USB 3.0 while still allowing USB 3.0 devices to operate at USB 3.0 speeds with modern operating systems.

    • Enabled—USB 3.0-capable devices operate at USB 3.0 speeds at all times (including the pre-boot environment) when in UEFI Boot Mode. Do not use this mode with operating systems that do not support USB 3.0. When operating in Legacy BIOS Boot Mode, the USB 3.0 ports do not function in the pre-boot environment and are not bootable.

    • Disabled—USB 3.0-capable devices function at USB 2.0 speeds at all times.

  3. Press F10 to save your selection.

What I found remarkable is the fact that you shouldn't enable it unless you know for sure the OS supports USB 3.0.

Reference : http://h17007.www1.hp.com/docs/iss/proliant_uefi/s_Accessing_USB_Options_201310290123.html#s_USB_30_mode

I'll keep this post up to date when I have more information.

 

Nothing I tried so far lets me boot from the network in UEFI mode, or even detect the SD card with a manual install.

PowerCLI Start and stop VM using get-view

Intro

When trying to stop and start a bunch of servers I noticed that the normal cmdlets take a while to shut down all VMs, mostly because the tasks are started one after another. When you select the same bunch of VMs in vCenter, right-click and select shutdown, it almost instantly starts the requested tasks.

get-vm | shutdown-vmguest -confirm:$false

Fine-Tuning

I have a test set of 5 CentOS virtual machines which all have CENTOS in their name.

Using the Measure-Command cmdlet I measured the time it took to run the commands.

measure-command {(get-vm -name centos*)|Start-VM -Confirm:$false -RunAsync}|select totalseconds|ft -a      # TotalSeconds: 3.32
measure-command {(get-vm -name centos*)|Shutdown-VMGuest -Confirm:$false}|select totalseconds|ft -a        # TotalSeconds: 6.18

Not too bad, you would say, with 5 VMs.

But let’s extrapolate this for more machines

Amount of VMs   Start (sec)   Stop (sec)   Start (min)   Stop (min)
1               0.66          1.24         0.01          0.02
50              33.20         61.80        0.55          1.03
100             66.40         123.60       1.11          2.06
500             332.00        618.00       5.53          10.30
1000            664.00        1236.00      11.07         20.60

As you can see, this adds up to a lot of time.

Now I built 2 commands which do exactly the same thing, only using Get-View.

measure-command {(Get-View -ViewType VirtualMachine -Property Name -Filter @{"Name" = "CentOS*"}).ShutdownGuest()}|select totalseconds|ft -a        # TotalSeconds: 2.33
measure-command {(Get-View -ViewType VirtualMachine -Property Name -Filter @{"Name" = "CentOS*"}).PowerOnVM_Task($null)}|select totalseconds|ft -a   # TotalSeconds: 1.64
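Stripped of the measurement wrapper, the two calls look like this (here I anchored the pattern with ^, as discussed in the regex post earlier, so only names starting with CentOS match):

# Shut down the guest OS of every VM whose name starts with CentOS
(Get-View -ViewType VirtualMachine -Property Name -Filter @{"Name" = "^CentOS"}).ShutdownGuest()

# Power the same set back on; PowerOnVM_Task takes an optional host argument, hence $null
(Get-View -ViewType VirtualMachine -Property Name -Filter @{"Name" = "^CentOS"}).PowerOnVM_Task($null)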

When we extrapolate this one we see a lot of change

Amount of VMs   Start (sec)   Stop (sec)   Start (min)   Stop (min)
1               0.20          0.04         0.00          0.00
50              10.00         2.00         0.17          0.03
100             20.00         4.00         0.33          0.07
500             100.00        20.00        1.67          0.33
1000            200.00        40.00        3.33          0.67

 Conclusion

Starting 1000 VMs goes from roughly 11 minutes to about 3 minutes, and stopping from 20.6 minutes to 0.67 minutes. That's an incredible time win when doing this with a lot of servers.

As you can see, if you have a small set of VMs or enough time (batch jobs), it might be easier to use the common cmdlets. Those will do the actions you want, plus they're more readable for colleagues who understand PowerShell but for whom the VI APIs might be a bridge too far.

I haven't been able to test this in practice at scale yet, so the extrapolation is based on the time it took to power on and shut down 5 test VMs.

 

VCSA API and plugin commands

When I first logged in to the appliance shell I noticed there is a complete list of APIs and plugins.

The complete list and description can be found here:

Plug-Ins in the vCenter Server Appliance Shell
API Commands in the vCenter Server Appliance Shell

So let’s play a bit :

Command> help pi com.vmware.vimtop
top
Display vSphere processes information.

I wonder if that will display a top-like view; let's hit it!

Command> pi com.vmware.vimtop

vimtop

Nice, but it's also possible to just type "vimtop" at the prompt.

What more do we have?

It looks like besides the root account there is a postgres user:

Command> com.vmware.appliance.version1.localaccounts.user.list
Config:
Configuration:
Username: root
Status: enabled
Role: superAdmin
Passwordstatus: valid
Fullname: root
Email:
Configuration:
Username: postgres
Status: disabled
Role:
Passwordstatus: notset
Fullname:
Email:

Hmm, strange, this doesn't look like the complete list; when I look at the users from a shell environment I see the users below:

localhost:~ # cut -d: -f1 /etc/passwd
bin
daemon
dhcpd
haldaemon
ldap
mail
man
messagebus
nobody
ntp
polkituser
postfix
root
sshd
stunnel
uuidd
wwwrun
nginx
tcserver
cm
netdumper
vapiEndpoint
postgres
mbcs
eam
deploy
vdcs
vpx-workflow
vsm
vsphere-client
perfcharts
vpostgres

 

As you probably noticed, in the normal mode you can only do API and plugin calls. But you can switch to a shell.

Switching to shell

Connected to service
* List APIs: "help api list"
* List Plugins: "help pi list"
* Enable BASH access: "shell.set --enabled True"
* Launch BASH: "shell"

Standard shell access is disabled; this can be seen by using shell.get:

Command> shell.get
Config:
Enabled: False
Timeout: 0

Now let’s enable the access :

Command> shell.set --enabled True
Command> shell.get
Config:
Enabled: True
Timeout: 3597

Once we've entered the shell, we can use some basic Linux commands:


localhost:/ # df -h
Filesystem                            Size  Used Avail Use% Mounted on
/dev/sda3                              11G  3.7G  6.6G  36% /
udev                                  4.0G  164K  4.0G   1% /dev
tmpfs                                 4.0G   32K  4.0G   1% /dev/shm
/dev/sda1                             128M   38M   84M  31% /boot
/dev/mapper/core_vg-core               25G  173M   24G   1% /storage/core
/dev/mapper/log_vg-log                9.9G  1.1G  8.3G  12% /storage/log
/dev/mapper/db_vg-db                  9.9G  199M  9.2G   3% /storage/db
/dev/mapper/dblog_vg-dblog            5.0G  171M  4.5G   4% /storage/dblog
/dev/mapper/seat_vg-seat              9.9G  188M  9.2G   2% /storage/seat
/dev/mapper/netdump_vg-netdump       1001M   18M  932M   2% /storage/netdump
/dev/mapper/autodeploy_vg-autodeploy  9.9G  151M  9.2G   2% /storage/autodeploy
/dev/mapper/invsvc_vg-invsvc          5.0G  157M  4.6G   4% /storage/invsvc
localhost:/ # mount
/dev/sda3 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,mode=1777)
devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
/dev/sda1 on /boot type ext3 (rw,noexec,nosuid,nodev,noacl)
/dev/mapper/core_vg-core on /storage/core type ext3 (rw)
/dev/mapper/log_vg-log on /storage/log type ext3 (rw)
/dev/mapper/db_vg-db on /storage/db type ext3 (rw,noatime,nodiratime)
/dev/mapper/dblog_vg-dblog on /storage/dblog type ext3 (rw,noatime,nodiratime)
/dev/mapper/seat_vg-seat on /storage/seat type ext3 (rw,noatime,nodiratime)
/dev/mapper/netdump_vg-netdump on /storage/netdump type ext3 (rw)
/dev/mapper/autodeploy_vg-autodeploy on /storage/autodeploy type ext3 (rw)
/dev/mapper/invsvc_vg-invsvc on /storage/invsvc type ext3 (rw,noatime,nodiratime)
none on /var/lib/ntp/proc type proc (ro,nosuid,nodev)

localhost:/var/log # tail -f messages.log
2015-04-28T12:51:30.980333+00:00 localhost kernel: [1652125.714038] IPfilter Dropped: IN=eth0 OUT= MAC=01:00:5e:00:00:01:00:22:bd:37:fc:00:08:00 SRC=145.70.12.252 DST=224.0.0.1 LEN=36 TOS=0x00 PREC=0xC0 TTL=1 ID=61046 PROTO=2
2015-04-28T12:51:42.931041+00:00 localhost su: (to vpostgres) root on none
2015-04-28T12:51:50.860391+00:00 localhost kernel: [1652145.593672] IPfilter Dropped: IN=eth0 OUT= MAC=01:00:5e:00:00:01:00:22:bd:37:fc:00:08:00 SRC=145.70.12.252 DST=224.0.0.1 LEN=36 TOS=0x00 PREC=0xC0 TTL=1 ID=5260 PROTO=2
2015-04-28T12:51:51.044384+00:00 localhost kernel: [1652145.777641] IPfilter Dropped: IN=eth0 OUT= MAC=01:00:5e:00:00:01:00:22:bd:37:fc:00:08:00 SRC=145.70.12.252 DST=224.0.0.1 LEN=36 TOS=0x00 PREC=0xC0 TTL=1 ID=5416 PROTO=2
2015-04-28T12:52:15.344457+00:00 localhost su: (to vpostgres) roo

Something else I noticed when enabling shell access: the timeout starts counting down. If I keep hitting shell.get:

Command> shell.get
Config:
Enabled: True
Timeout: 3597
Command> shell.get
Config:
Enabled: True
Timeout: 3596
Command> shell.get
Config:
Enabled: True
Timeout: 3596
Command> shell.get
Config:
Enabled: True
Timeout: 3596
Command> shell.get
Config:
Enabled: True
Timeout: 3595

You see the timeout decrease; I guess it's in seconds, so 3600 seconds = 60 minutes = 1 hour.

It's possible to change the timeout using the shell.set command:


Command> shell.set -help
shell.set: error: unrecognized arguments: -help
Command> shell.set --help

Usage:
shell.set [--help/-h] --enabled BOOL --timeout INT
Description:
Set enabled state of BASH, that is, access to BASH from
within the controlled CLI.
Input Arguments:
--enabled BOOL
Enabled can be set to true or false
--timeout INT
The timeout (in seconds) specifies how long you enable the
Shell access.
