The First Month With PowerShell

About a month ago, I decided that it was time. I had written (i.e., copied and pasted) a couple of PowerShell scripts in the past to accomplish something, but that was no longer good enough. To be completely honest, I don't even remember now what gave me the itch to learn PowerShell a month ago, but here I am: sifting through many resources, watching videos, reading blog posts, rediscovering Twitter. It's been a very interesting ride so far. What have I learned in the last month? I still know nothing…well, next to nothing now.

My journey began with a trip to Google: "learn powershell". Lo and behold, the second link was the Microsoft Virtual Academy. Having perused this wondrous resource before, I knew it would have videos and some great content, so I started there with Getting Started with PowerShell 3.0 Jump Start. I burned through that in about a day and wanted more, so I searched for other PowerShell videos on the MVA. Wow, there are a ton on there, so away I went. I have to say, if you are behind like I am, that is probably one of the best places to start. Fantastic content.

Let's go over some things that I have learned so far. First of all, the help system in PowerShell is absolutely fantastic; Get-Help will be your best friend. Second, the syntax of the cmdlets is very easy to remember: Verb-Noun. It couldn't be simpler. So, for example, say you want to get information about an AD user. Hmm, sure enough: Get-ADUser is the name of the cmdlet.
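As a quick illustration of leaning on the help system (the cmdlets here are real, but the output will vary by machine):

```powershell
# Browse what's available: every cmdlet using the Get verb
Get-Command -Verb Get

# Full help for a cmdlet, including parameter descriptions
Get-Help Get-Service -Full

# Often the fastest way to learn a cmdlet: just show the examples
Get-Help Get-Service -Examples
```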

Since I had to take two programming courses in college (why I chose C++ is beyond me), I already understood some common programming concepts to a degree, things such as loops and operators. Those courses were finally good for something. Become familiar with how these work as soon as possible; they are invaluable to understand and be able to use.
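To illustrate the loops and operators mentioned above, here is a tiny generic sketch (not from any particular script of mine):

```powershell
# Loop over a range of numbers with foreach
foreach ($n in 1..5) {
    # -gt is PowerShell's "greater than" comparison operator
    if ($n -gt 3) {
        Write-Output "$n is greater than 3"
    }
}
```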

One of the coolest things in my opinion about PowerShell happens to be, at least for myself, the hardest concept to really wrap your head around.  Everything is an object.  Well what does that mean?  The easiest way I know how to explain it is with an example.  Let’s pick something really easy like Get-Date.  Let’s just type that in and see what we get.

Get-Date

Well, that gave us what you probably expected.
Thursday, December 03, 2015 10:49:38 PM

I think I did this next part by accident and figured out what it really meant that everything was an object. Let's assign that to a variable and then display its contents.

$date = Get-Date
$date

Ok, so we saved the current date and time into a variable. So what? Here is the cool part: each object in PowerShell has methods and properties that you can use. The next best thing to Get-Help is something called Get-Member. If you pipe an object into Get-Member, it gives you all kinds of information about what methods can be used on that object, as well as that object's properties. At this point in my learning, it's the properties that interested me. Let's take that variable that we saved the current date and time to.

$date | Get-Member

The first thing that I want you to really take notice of, besides that wall of text that just floated by, is the very first line of output: "TypeName: System.DateTime". The variable that we saved the current date to is actually holding a DateTime object. Ok, so looking through some of the properties, I see "DayOfWeek". What does that do? That looks interesting. Let's try it.

$date.DayOfWeek

That returned just what it sounds like: the day of the week that you saved as part of the current date and time from Get-Date. Go ahead and test some of the others. Pretty cool, right? Oh, it gets better.
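Properties aren't the only thing Get-Member lists; the methods are just as handy. A small sketch using real DateTime methods:

```powershell
$date = Get-Date

# Methods shown by Get-Member can be called directly on the object
$date.AddDays(7)             # a new DateTime one week from now
$date.ToString('yyyy-MM-dd') # format the date as a string
```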

Let's say that you want to see the services on your PC.

Get-Service

OK, but you really only wanted to see the running services. Notice the headers of each column; those are properties that you can filter on. This is when you utilize the pipeline to pass the output of one cmdlet as the input of another cmdlet.

Get-Service | Where-Object Status -eq 'Running'

Great, but I still see a Status column, which is now not needed; I know the status already because I told it to only give me the services that are running. Also, that Name column contains some funny things, and I just want the "English" name of the services. Remember when I said previously that objects have properties? Well, the column headers are some of those properties, and we just want the "DisplayName". We could save the output to a variable and view that property from the variable, but that is two separate steps. The other way (which, again, I think I figured out by accident) is to wrap the command in parentheses and run methods or view properties that way.

(Get-Service | Where-Object Status -eq 'Running').DisplayName

I have to say, I use this like crazy now if I want to see just one property of something (which is usually the .Name).
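For what it's worth, Select-Object can get you the same result, and it is handy when you want more than one property at a time (a generic sketch):

```powershell
# -ExpandProperty returns the raw values rather than objects with one column
Get-Service | Where-Object Status -eq 'Running' |
    Select-Object -ExpandProperty DisplayName

# Or keep several properties at once
Get-Service | Where-Object Status -eq 'Running' |
    Select-Object Name, DisplayName, Status
```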

These are just barely scratching the surface of the capabilities of PowerShell. As I said in the last post, if you are a System Administrator and not already using PowerShell, do yourself a favor and get cracking. Start with the MVA jump start video and go to town. You’ll be glad that you did.

As always, any tips, comments, feedback, or questions are welcome. Thanks for dropping by and I’ll see you soon.


Let's Talk Some PowerShell

It has been roughly two weeks since I began the journey into PowerShell. In that short amount of time, I have learned a ton. I have to say, if you are a system admin and you don't already know PowerShell, get on it. It is such a time saver. Let's take one example in particular. Just think for a moment about one of the most time-consuming tasks you will ever do: server maintenance. With PowerShell and Pester, no more will you painstakingly remote into every server to verify that the correct services are running. No more will you open a web browser to verify that the site loads. With a little bit of up-front work, you will run a single command and check every server you have in minutes. Yes, I said minutes. I'll give you a moment to let that sink in.

A little back story. In my endless search for PowerShell information, I came across something called Pester. What is Pester, you ask? Pester was designed to give PowerShell developers a way to do test-driven development. What it does is evaluate the result of a command or set of commands; the evaluation will either pass or fail and give a visual representation of that pass or fail as either green or red. Very interesting. In doing some more searching on Pester itself, I came across a blog post from last week showing how someone was using Pester to do operational validation on their SQL Server. Even more interesting. Seeing as how I had just done server maintenance the weekend before and it takes roughly an hour or more to verify everything is good after rebooting all the servers, the pain was still fresh in my memory.

First things first: get Pester installed. Fortunately, I'm running Windows 10 on my work laptop, and Pester comes preinstalled. It was a sign. I opened up PowerShell, created a new directory for my validation tests, and created my first Pester fixture.

New-Fixture -Name server

This will create two new files:
server.ps1
server.Tests.ps1

If you open up server.ps1, you'll notice that all that is there is an empty function named "server". This file must remain in the same directory as the server.Tests.ps1 file; otherwise, when you go to run your Pester tests, it will throw an error. I found this out the hard way. Best thing to do is ignore the extra file; it isn't needed for our purpose.

Opening up the server.Tests.ps1 file, you will see the following.

$here = Split-Path -Parent $MyInvocation.MyCommand.Path
$sut = (Split-Path -Leaf $MyInvocation.MyCommand.Path).Replace(".Tests.", ".")
. "$here\$sut"

Describe "server" {
    It "does something useful" {
        $true | Should Be $false
    }
}

Always leave those first four lines alone. We are concerned with the rest of it when configuring our tests. Just to see what happens, let's run the test from within the directory containing it.

Invoke-Pester

You will immediately see a block of the dreaded PowerShell red text.  Let’s take a look at what it actually is telling us compared to what the file had in it line by line.

The first thing Pester tells us when we run our test is “Describing server”.

Describe "server" {

Ok, so now we know where that comes from in the test file.

The next thing Pester tells us is “does something useful”.

It "does something useful" {

So far so good.  The rest of what Pester is telling us is that it was expecting false but got true.  This is explained here:

$true | Should Be $false

With me so far?  This is where the fun part begins.  Let’s just say that we want to verify that BITS is running on our server.  This is what we would change in our test file.

It "The BITS service should be running." {
    (Invoke-Command -ComputerName server {Get-Service -Name bits}).Status |
        Should Be 'Running'
}

Now, when you Invoke-Pester, it will show a pretty green line of success. A couple of things to take note of: I placed the entire command in parentheses because we needed to pipe the Status property of the object returned by the command (which in this case was a service object; we just as easily could have checked the .Name of a Get-ADUser object). The last line reads just like it says: the status of the BITS service "Should Be 'Running'". We are telling Pester what to verify.

I have created all of my base tests for services as well as port connections for my servers. A good example of a port connection test is for a file server: you know that it will need to answer requests via SMB, so add another test to the test file for that server. Make sure you add -InformationLevel Quiet so Test-NetConnection only returns $true or $false.

It "Should accept connections on port 445 (SMB)." {
    Test-NetConnection -ComputerName server -Port 445 -InformationLevel Quiet |
        Should Be $true
}

I think you get the idea now.  Any kind of information you can grab with PowerShell, you can verify.  For example, this could be a basic test file for a SQL server.


$ComputerName = 'SQLServer'
$Session = New-PSSession -ComputerName $ComputerName

Describe "SQLServer" {
    It "Should be listening on port 1433 (SQL Server)" {
        Test-NetConnection -ComputerName $ComputerName -Port 1433 -InformationLevel Quiet |
            Should Be $true
    }
    It "Is the SQL Server service running?" {
        (Invoke-Command -Session $Session {Get-Service -Name MSSQLSERVER}).Status |
            Should Be 'Running'
    }
    It "Is the SQL Server Agent service running?" {
        (Invoke-Command -Session $Session {Get-Service -Name SQLSERVERAGENT}).Status |
            Should Be 'Running'
    }
}

Remove-PSSession -Session $Session

One thing I've learned in the very short time I've been learning and using PowerShell: create a variable for anything you will use more than once in a script. Also, if you need to run multiple commands on a single remote machine, create a PSSession for it; the commands will run faster. It doesn't seem like much of a gain on these small scripts, but if you have a very large test file with tons of commands being sent to the remote server, the speed gains add up, especially if you are going to run tests on tens to hundreds of servers (see below). Also, create a separate Pester fixture for each server. This way you can test one server if you need to.

Invoke-Pester -TestName server

This will run Pester on just that one test file.  If you “Invoke-Pester” without any arguments, it will run every test file in the current directory.
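The PSSession tip from above can be sketched like this (the server name is a placeholder):

```powershell
# Without a session: each Invoke-Command sets up a new connection
(Invoke-Command -ComputerName server {Get-Service -Name bits}).Status

# With a session: the connection is built once and reused for every call
$Session = New-PSSession -ComputerName server
(Invoke-Command -Session $Session {Get-Service -Name bits}).Status
(Invoke-Command -Session $Session {Get-Service -Name winrm}).Status
Remove-PSSession -Session $Session
```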

The last little thing I will leave you with is this. Let's say that you want to add many services to test with Pester. I wrote a small script that will generate the Pester test code for services. It dumps it to a test.ps1 file, which you can then manually copy and paste into your actual test file. This could probably be more automated, but as I haven't been using this very long, it was a good start and saved me a ton of time.

# Grab the running SQL services from the remote server
$Services = Get-Service -DisplayName sql* -ComputerName server |
    Where-Object Status -eq 'Running'

# Start with a clean output file (ignore the error if it doesn't exist yet)
Remove-Item -Path "C:\Scripts\test.ps1" -Force -ErrorAction SilentlyContinue

foreach ($S in $Services) {

    $DisplayName = $S.DisplayName
    $Name = $S.Name

    # Build the Pester test block; `$Session stays escaped so it is written literally
    $test = "It `"Is the $DisplayName service running?`" {
    (Invoke-Command -Session `$Session {Get-Service -Name $Name}).Status |
        Should Be 'Running'
}"

    $test | Out-File -FilePath "C:\Scripts\test.ps1" -Append
}

As always, any tips, comments, feedback, or questions are welcome.  Thanks for dropping by and I’ll see you soon.


Thinking about AD OU Design…

I don’t consider myself to be an OCD person.  I really don’t.  That being said, I can be pretty OCD when it comes to OU design in Active Directory.  How you layout your OUs can cause some pretty big headaches later if you don’t think about the future.  Throughout this article, I will be sharing my suggestions to help you design your OU structure.

When I first started out, I had no idea what I was doing. I didn't understand group policy, how it was applied, or why you would ever want to create OUs to put your stuff in. There is already a Users folder. There is already a Computers folder. Those are fine, right? Well, probably not. Do you want to apply a group policy to only some users or computers and not others, where the settings are not preferences? If so, then it's time to start creating some OUs. Do you want all of your users in one folder? Do you want all of your computers in one folder? If you answered no to the last two questions, then guess what? Time to create some OUs.

One great way to look at this is to think about the policies you want to apply and design your OU structure around that. But wait, don't just think about your policies; think about your security groups and distribution groups as well. If you are like me, you want the naming to be consistent for everything and to make sense. Here is a real-world example that will hopefully help explain this concept. Before presenting it, please understand one thing: I am all about functional GPOs, not monolithic GPOs.

My current network has offices in three locations and some remote workers. The departments are not each in a single location. We have folks in the operations department in two of our offices. We have folks in our sales department in all three locations and remote. We have network shares dedicated to each department, as well as network shares that all departments have access to. See where I'm going with this? I need a policy for the shared drive mappings. That's an OU. I need a policy for the departmental drive mappings. More OUs. Without going into too much detail, I also have different printers in each office, and each printer is only used by certain departments. Well, there are a few more policies.

Let's just back up a bit and draw a picture of how we would probably want to lay this out. We have some policies that will be applied to all departments, but some that are department-specific, so we would want a parent OU for the departments with child OUs underneath for each department.

Basic OU Layout
So from there, let’s look at all those policies we talked about.  Drive mappings for all departments, drive mappings for each department, corporate wide desktop background, and some printer shares.  It could look something like this.

Basic GPO Layout

So you can see why good OU design is important, and some things to think about when setting it up for your domain. As far as naming goes, as I said previously, I prefer it to be consistent across the board. I'll give you two guesses as to what the distribution group and security group names are for each department. Be very mindful of what you name things in AD. You want names to make sense to you and be as descriptive as possible without being really long. The AD objects have a description field; use it to your advantage and make your AD self-documenting. You will thank yourself in two years when you run across that random distribution group created for that one thing you don't remember, and the description field is filled in, telling your future self why it was created.
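Keeping with the PowerShell theme of the earlier posts, the description field can be filled in from the console too. A sketch using the ActiveDirectory module; the OU path and group name are made-up examples:

```powershell
# Requires the ActiveDirectory module (RSAT or a domain controller)
Import-Module ActiveDirectory

# Create a department OU with a description
New-ADOrganizationalUnit -Name 'Sales' `
    -Path 'OU=Departments,DC=ad,DC=testlab,DC=com' `
    -Description 'Sales department user accounts'

# Document an existing group so future-you knows why it exists
Set-ADGroup -Identity 'Sales-DriveMappings' `
    -Description 'Members receive the Sales drive mapping GPO'
```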

Hopefully this article has been helpful to you and given you a good place to start when thinking about how you want to layout the OUs in your AD.

As always, any tips, comments, feedback, or questions are welcome.  Thanks for dropping by and I’ll see you soon.


Machine Imaging With Fog – Usage Walkthrough

In the last article, I went over how to setup your very own Fog imaging server from scratch.  In this next installment, I’ll walk you through getting your first image setup and saved.

First thing we're going to do is make one tweak to the settings, so log back into the Fog web management. Once we are logged in, click on the blue question mark in the top menu. This is where we access the Fog configuration.

System Settings Step 1

From the menu on the left, select “FOG Settings”.

System Settings Step 2

You’ll see a list of different configuration areas that do different things.  What we are going to focus on is under “General Settings”.  It is the 9th item from the bottom.  Click that to expand that configuration area.  The one setting we will be changing is the “FOG_HOST_LOOKUP” setting.  We will be disabling it.

System Settings Step 3

What that does is, when you click on "List All Hosts" under the host section, it will try to ping every host you have. This sounds great in theory until you have 40 machines inventoried on your Fog server with some that are offline; it will make the system nearly unusable. So we disable it.

The next step is to create the image in the system. Click on the icon at the top that looks kind of like a picture frame.

Image Creation Step 1

This will take us into the image management section of Fog.  From here, you will select “Create New Image” from the main menu.

Image Creation Step 2

There are some basic things that we need to tell the system about our image.  We have to give it a name.  Naming convention is really up to you.  Name your images so that they will make sense to you.  Also I highly suggest you fill in a description so a year from now you know what that image was for.  Since we have a very basic setup, “Storage Group” will remain the default.  The same goes for “Image Path”.  I will generally not change that either.  Be sure to select the correct operating system.

Image Creation Step 3

The last option needs a little bit of explanation. "Single Disk – Resizable – (1)" is just how it sounds, with one caveat: it only allows for one partition. If you are installing Windows from scratch in order to sysprep it, you will generally be OK. Where this will not work is with an OEM factory install; those typically have multiple partitions, including a restore partition and some other random partitions. If you want to image one of these machines for backup purposes or to do a hard drive swap, you will need to select one of the other options. I typically use "Multiple Partition Image – Single Disk (Not Resizable) – (2)" just to cover all bases. If you have a machine with multiple drives, you can select "Multiple Partition Image – Multiple Disks (Not Resizable) – (3)". The reason "resizable" versus "not resizable" is important: with the first option, you can restore the image to a disk smaller than the original, as long as it is large enough to hold the data from the original image. The other options do not allow for this and must be restored to a drive identical to the original. Where this could bite you is if you want to replace someone's hard drive with an SSD; the drive geometry is slightly different from a traditional mechanical drive, so it is technically smaller, hence the need to resize the partition on restore. The last option, I do not have a use case for; I have never used it and don't know of any reason why you would.

Image Type

Now that we have that handled, it's time to register our first host with the server. To do this, boot the machine you want to image (or in this case, the machine with the master image that we want to save) from the network. You will be presented with a menu once it boots; what we want is a quick inventory, just to get it registered.

Host Registration

Once selected, it will scroll through some information that it pulls from the BIOS and sends to the Fog server. Once this is finished, it will reboot. Be sure to power off the machine before it starts to boot into Windows; assuming you sysprepped it, you don't want it booting until the master image has been saved.

Now that your machine is registered, we need to update its information in Fog with some additional details. From the top menu, click on host management.

Host Management Step 1

From the left menu, select "List All Hosts". Once you have multiple hosts, you can just as easily do a search. The only issue with this is that a quick registration names the record after the MAC address of the machine, so it is not really that easy to search for. What I typically do is list all hosts, sort by image, and find the machine that doesn't have an image assigned yet. You can click on the name of the host or the edit button to go into the properties of the host.

Host Management Edit 1

From here, you will want to change the host name to something meaningful.  I will name the host in Fog the same as the PC name the machine will have once it is joined to the domain just so everything matches.  You can optionally put in a description or the product key if you want.  I leave all the fields default as I do not use this system for inventory, only imaging.  The important part of this step is to tell Fog what image is to be used for this host.  From the drop down, select the image that you created that will be used for this machine.  Once you are done, click on the update button to save your changes.

Host Management Edit 2

Now that Fog knows what image to use, click on “Basic Tasks” from the left menu for this host.

Host Imaging Step 1

This will bring up the basic tasks section for the host where we tell the system what we want to do with it.  Since we will be saving a brand new image from the master machine, we will select upload.

Host Imaging Step 2

We will want to schedule instant deployment and then click on the create upload task button.

Host Imaging Step 3

Once the create button is clicked, it will give us a confirmation message.  From here, click on the task management icon at the top to list the scheduled and active tasks.

Host Imaging Step 4

Now you can go back to the machine you are imaging (again, in this case, saving an image from it), power it up, and boot to the network again. It will automatically start the imaging process, as the Fog server determines what machine it is by MAC address. This is important because you could image multiple machines with different images, and the server keeps track of it all and sends the correct image to the correct machine.

In the Fog management console, progress will update under task management as the process completes.  You can check percentages and ETAs by mousing over the progress bar.

Image Progress

On the machine being imaged, you will also see progress displayed.

Image Progress 2

Once the upload completes, you are ready to register more hosts and download the image to them. The process for that is the same as the upload, but instead of selecting "Upload" from Basic Tasks, you will select "Download". All other steps are the same.

We have now gone through the installation and setup of your new Fog server and walked through saving your first image. You are in a great spot for all the new machines that you need to deploy, and you will have them done in no time.

Please feel free to leave any comments, suggestions, and criticisms.  Thanks for visiting and I’ll see you soon.


Machine Imaging With Fog – Setup

One of the best things you can probably do to help yourself as a System Administrator in the SMB space is to implement some way to do machine imaging. This will save you hours whenever you have to order a new machine and set it up, or if you want to wipe a badly infected machine. Through the years, I have used a handful of products to accomplish this. A couple of years ago, I came across a fantastic open source project that has been the best I've used. Fog is an imaging solution that utilizes PXE to network boot machines and image them over your network. This is the first article of a series on the installation and use of Fog.

First things first: there are a few assumptions that I am making.

  1. You only have a single subnet you will be servicing.
  2. The machines you will be imaging are located on your live network.
  3. A local windows server is handling DHCP(typically the domain controller).

I will be using Ubuntu Server 14.04. Install Ubuntu using all default options; you do not need to install any additional packages during the OS install (I typically install OpenSSH just for ease of administration).

Once the server reboots, log in using the account you set up during the installation process. The first thing we need to do with our new server is give it a static IP address. I personally use vim, but any editor will get the job done to update the configuration file. Type the following command into the command prompt to open the interfaces configuration file using vim. Since it uses sudo, it will ask you for your password.

sudo vim /etc/network/interfaces

With a fresh installation of Ubuntu server, the interfaces configuration file will look like this:

# This file describes the network interfaces available on your system.
# and how to activate them.  for more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp

You will make your changes to the primary network interface section. To make changes to the file in vim, press "i" to enter insert mode. When finished editing, press the Esc key to exit insert mode, then press ":" to enter command mode, type "wq", and press Enter. This will (w)rite the file and (q)uit vim. Keep in mind to use information that is applicable to your own network. Below are the settings in my lab.

# This file describes the network interfaces available on your system.
# and how to activate them.  for more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
address 10.0.100.20
netmask 255.255.255.0
network 10.0.100.0
broadcast 10.0.100.255
gateway 10.0.100.1

# DNS Information
dns-nameservers 10.0.100.10
dns-search ad.testlab.com

While it isn’t necessary to add the comment “# DNS Information”, I like to do that to keep everything documented as I go and make it as obvious as possible what something is.

After saving the file, you will need to restart networking for the changes to take effect.  Issuing the following command should accomplish this.

sudo /etc/init.d/networking restart

Verify that you are able to ping out to the internet; I will typically ping www.google.com as a test. Once verified, we want to update the server. Run the following to accomplish this.

sudo apt-get update
sudo apt-get upgrade

Now that we are all up to date at the OS level it’s time to download and install Fog.  Run the following to download, unzip, and start the installer.

sudo wget http://downloads.sourceforge.net/freeghost/fog_1.2.0.tar.gz
sudo tar -xvzf fog_1.2.0.tar.gz
cd fog_1.2.0/bin
sudo ./installfog.sh

Since we are using Ubuntu server, we will keep the default.

Fog Step 1

Again, we will be using the default option to install a “Normal Server”.

Step 2

Verify that the IP address to be used is the address that you assigned to your Fog server previously. We will not be using this server for DHCP, so be sure to answer no to the questions pertaining to DHCP. We do not want to change the default network interface, so leave that alone. I don't need the additional language packs, and I personally select no on the donation of computer resources, but those are up to you and your needs.

Step 3

You will now be presented with a summary of the settings that will be used. Respond with "Y" to continue the installation.

Once the installation gets to the SQL configuration, you will be asked to provide a password for the SQL root account.  Remember that password as you will need it later in the installation.

Step 4

When asked, enter the password you setup for the SQL root user during the previous section.

When the install gets to a certain point, it will ask that you open the management site in a browser to setup the database.

Step 5

Go to the site (it should be the IP address of your Fog server) and click on the blue "Install/Upgrade Now" button.

Step 6

When that finishes, you will be presented with a link to the login page.  Before going there, go back to the server and finish the installation.

Step 7

At the end of the installation, you will be asked if you want to notify the Fog group about the install. Again, that is up to you. Finally, you will be presented with "Setup Complete!" and you are ready to log in to your new Fog server. The default username is "fog" and the default password is "password". I very strongly recommend changing the password.

Step 8

The last step we will need to take is to add two options to the DHCP scope to tell the PXE clients where to go. Since we are using DHCP on our domain controller, go ahead and open the DHCP MMC snap-in. Drill into the scope and scope options, then right-click and select "Configure Options".

DHCP Scope Options Location

This will bring up the scope options dialog.  The two options you will be adding are “066 Boot Server Host Name” and “067 Bootfile Name”.

For 066, the string value will be the IP address of your Fog server; in our case, that is 10.0.100.20. For 067, the string value will be "undionly.kpxe", which is the default boot file for Fog.

Scope Options
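If you would rather set those two options from PowerShell than through the MMC, the DhcpServer module on the Windows DHCP server can do it. A sketch using my lab's scope and Fog server address; yours will differ:

```powershell
# Option 66: the PXE boot server (the Fog server's IP)
Set-DhcpServerv4OptionValue -ScopeId 10.0.100.0 -OptionId 66 -Value '10.0.100.20'

# Option 67: the boot file Fog hands to PXE clients
Set-DhcpServerv4OptionValue -ScopeId 10.0.100.0 -OptionId 67 -Value 'undionly.kpxe'
```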

Well, now we have our imaging server set up. Congratulations, you made it. This is yet another step toward a better life as a System Administrator. In the next article, I will go through some system configuration of the Fog server and how to get it ready to start cranking out those images.

Stay tuned and thanks for visiting.


Something New

First off, welcome. Somehow you ended up here to find the fresh beginnings of a potentially interesting blog. Why am I starting this? Well, I'm a Systems Administrator in the SMB space; an army of one, if you will. In a small to medium-sized company, there is typically only one IT person. This presents some pretty interesting opportunities and challenges. I am the help desk, printer tech, server admin, network engineer, and anything else that has to do with the technology in the office. Because of this, I have a breadth of knowledge that I can share.

I don't claim to be an "expert" in anything, and if that is what you are looking for, then I apologize now. My goal here is simply to share some tips, tricks, and possibly some insight into the things that I have found to work well in an SMB environment: from networking, virtualization, and AD and GPO design, all the way to what hardware seems to work well for me. Little tidbits of things that I have picked up over the years, or cool things that I have come across and learned to make things easier. Really, that is what it is all about: learning new things to make your job easier.

Things will start out fairly slow, so please try to be patient.  I’m not entirely sure where I want to go with this or how well it will do.  Here’s to things to come and thank you for visiting.
