Hello. In this video we’re going to discuss validation surveys. Validation surveys can be carried out at the end of the deploy phase as a post-installation survey, and also at the beginning of the diagnose stage to validate an existing network. In both cases, the workflow is similar.
Now, there are many different vendor and software solutions for performing validation surveys. Some of the popular ones are NetAlly AirMapper, which runs on the AirCheck G2 and the EtherScope nXG, and NetAlly AirMagnet Survey PRO, which is a laptop-based survey solution. Ekahau, with their survey software and Sidekick combination, provide laptop and mobile device survey options. iBwave similarly have mobile and laptop-based solutions. And TamoSoft, with their TamoGraph software, offer a laptop-based survey solution.
And these are just some of the popular options for performing validation surveys. But even within this small list, we have options to survey using a laptop, tablet, mobile phone, or dedicated handheld device. So I would strongly recommend doing some research and, if possible, requesting some demos to make sure you find the right solution for you.
Now, in this video we’re going to cover some of the high-level concepts about validation surveys, but we’re not going to look at the low-level details of how you perform a validation survey. For that, I would recommend taking the training offered by your chosen survey solution vendor.
Now, there are two main options that survey solutions offer: active surveys and passive surveys. In a passive survey, our survey adapter will be in listen-only mode. That means it will be passively listening to beacon frames and, in some cases, probe response frames, too. Passive surveys record information about each beacon and probe response frame they receive, such as signal strength and noise. Active surveys, on the other hand, involve connecting to the network and performing some network tasks, such as a ping or iPerf test, at the same time as performing the passive survey.
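To make the active part a little more concrete, here is a minimal Python sketch of the kind of task an active survey performs: pinging a target and recording the packet loss seen at a survey location. This is not how any particular survey tool works internally; the target address, the location label, and the Linux-style ping flags are all assumptions for illustration.

# Minimal sketch of an active survey task: ping a target and record the
# packet loss seen at one survey location. Assumes a Unix-like "ping"
# with Linux-style flags; target and location label are placeholders.
import subprocess
import time

def ping_once(target, timeout_s=2):
    # Return True if a single ICMP echo to the target succeeds.
    result = subprocess.run(["ping", "-c", "1", "-W", str(timeout_s), target],
                            capture_output=True)
    return result.returncode == 0

def active_test(target, location, samples=5):
    # Record ping success rate at one survey location with a timestamp.
    successes = sum(ping_once(target) for _ in range(samples))
    return {"location": location,
            "time": time.time(),
            "loss_pct": 100 * (samples - successes) / samples}

print(active_test("192.0.2.1", "Warehouse aisle 3"))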
Now, when performing a passive survey, there are two main approaches for collecting data: what we call stop-and-go surveys and continuous surveys. Stop-and-go surveys involve the surveyor standing in a set location and clicking on the corresponding location on a map within the survey software, at which point the software will scan all the Wi-Fi channels and record all the data it hears at that location. Then you move on to a new location and click again. A continuous survey, on the other hand, will constantly collect data for the whole survey time, and the scanning results will be evenly distributed between clicked locations.
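To illustrate what “evenly distributed between clicked locations” means, here is a small Python sketch that spreads scan samples along the straight line between two clicked map points. Real survey tools may place samples more cleverly; this only shows the basic idea, and the coordinates are made up.

# Spread scan samples evenly along the line between two clicked points.
def distribute_samples(start, end, n_samples):
    positions = []
    for i in range(1, n_samples + 1):
        t = i / (n_samples + 1)  # fraction of the way from start to end
        x = start[0] + t * (end[0] - start[0])
        y = start[1] + t * (end[1] - start[1])
        positions.append((x, y))
    return positions

# Five scans collected while walking from click (0, 0) to click (10, 0)
print(distribute_samples((0, 0), (10, 0), 5))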
The big advantage of a continuous survey is that we’re generally going to collect a lot more data, and we’re not throwing data away as we move between locations. However, continuous surveys have a lot more scope for human error, because they require the surveyor to remember to click accurately: every time they start walking, every time they stop walking, every time they change direction, and whenever they pause to open or close a door, they need to remember to click. When someone asks you a question, “Hey, what are you doing?”, you need to remember to click before you stop to answer. Inaccuracy in the clicking, or not clicking when you are supposed to, can lead to data being recorded in the wrong locations. Stop-and-go surveys, by contrast, are much simpler: you go to a location, you stand still, you click, you wait until all the data has been recorded, and then you move to the next location and repeat.
It is easy to teach someone who has no experience how to perform a stop-and-go survey, and they will then be able to go and collect survey data for you with a high degree of accuracy. Continuous surveys, on the other hand, are probably the preferred choice for experienced, trained survey engineers, as they provide the easiest way of collecting the largest amount of data.
Let’s also consider active surveys. Now, active surveys can provide some useful information, but the problem with active surveys is how we interpret the results. You see, the results are always mapped to a location on a map: where did we drop some pings, or where was the throughput low? And the assumption that gets made is that the location where that happened is bad. However, that’s very often not the case. Yes, we experienced a problem in that location, but it was also at a particular point in time.
You see, the low throughput we experienced may just have been due to some congestion on the wired network at the point in time we took the reading. And the pings we dropped might have been due to the client roaming. But different clients, walking different paths, will roam in different locations, so it won’t always be the same location where we roam. And while that might be an indication of a roaming issue, it may have nothing to do with the location. Perhaps the roaming is slow because of latency between the access point and the RADIUS server, and maybe we need to enable a fast secure roaming protocol to fix the issue.
So, you see, the data from an active survey can be useful, but we need to be careful how we interpret it. Very often it indicates issues which need further investigation to determine the cause. Following a validation survey, it’s important the results are assessed against the gathered requirements, and any heat maps are correctly interpreted.
Now, device offsetting is an important tool we have available to us when interpreting survey results. You see, every device receives Wi-Fi signals differently. For more information on why that is the case, do watch the LCMI (least capable, most important) device video in the Define section of this website. With no offsetting, our survey results will show our survey adapter’s view of the wireless network, and there are two potential problems with that.
One problem occurs when comparing our survey results with our design: it is likely our design did not predict the RF coverage as received by our survey adapter. And the second problem is that the users of the wireless network are unlikely to be accessing the network using the same adapter as we used to survey. In fact, very often survey adapters have much better receive gain and receive sensitivity than our real clients.
So calculating and applying device offsets can help us to interpret the results of our surveys, and there are two types of offsets we may calculate. We have a design offset, which is the difference between our survey adapter and the predictive design, and we have a device offset, which is the difference between our survey adapter and a client device. Now, one of the easiest ways to calculate the design offset is to place an access point in an open space and carry out a normal survey, then compare the results to a predictive model with the access point in the same location and configured for the same power levels. For client device offsets, I would recommend getting access to a number of real client devices and comparing the signal strength seen by the survey adapter to that seen by the client devices.
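As a rough illustration of the client device offset calculation, here is a short Python sketch that averages the difference between survey adapter and client readings taken at the same spots. The dBm values are made up for the example.

# Average the difference between client and survey adapter readings (dBm)
# taken at the same locations to get a single device offset value.
def device_offset(survey_dbm, client_dbm):
    diffs = [c - s for s, c in zip(survey_dbm, client_dbm)]
    return sum(diffs) / len(diffs)

# Example readings taken at three locations (values are illustrative)
survey_adapter = [-55, -62, -70]
barcode_scanner = [-61, -69, -78]
print(f"Barcode scanner offset: {device_offset(survey_adapter, barcode_scanner):.1f} dB")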
Now, the environment, and the height and location of the access points, can affect these measurements. So, if possible, take these measurements on site, so you’re testing the clients in the environment they are actually going to be used in. And I would also take a few measurements in different locations to get a good average offset value.
Having calculated our device offsets, we can then use them to interpret the results of our survey. Let’s take a look at this warehouse survey I’m showing here. At the moment, we’re looking at the as-measured survey results, so it’s showing me the RF coverage as seen by my survey adapter. But let’s try applying some offsets to this coverage.
We’ll start by applying the offset for a ruggedized tablet device, which is used on the warehouse floor. Having applied it, although we see a few more orange and yellow areas, we’re still generally meeting our requirement of -67 dBm across the warehouse floor. If we apply another device offset we took, this time for an iPhone, we can see there are now more areas where we’re not meeting our requirement. However, we also have to ask the question, “Is an iPhone a device that’s going to be used in this warehouse? What are the critical devices?” Well, one of the critical devices for this warehouse was a barcode scanner, which we also took a device offset for. So when we look at how the barcode scanner sees the RF, we actually get a picture showing that we don’t really meet our requirement of -67 dBm for this barcode scanner. So if it had been determined that our barcode scanners require a signal strength of -67 dBm or better, we would be at a stage where we’d need to look at this RF design and see what optimization is needed.
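To show how an offset gets applied in practice, here is a small Python sketch that adjusts as-measured readings by a device offset and flags any locations that fall below a -67 dBm requirement. The offset value and the readings are illustrative, not taken from the survey shown.

# Apply a device offset to as-measured readings and flag locations that
# no longer meet the -67 dBm requirement (values are illustrative).
REQUIREMENT_DBM = -67
scanner_offset_db = -7.0  # e.g. from the offset calculation earlier
measured = {"Aisle 1": -58, "Aisle 2": -63, "Loading bay": -66}
for location, rssi in measured.items():
    adjusted = rssi + scanner_offset_db
    status = "OK" if adjusted >= REQUIREMENT_DBM else "below requirement"
    print(f"{location}: {adjusted:.1f} dBm -> {status}")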
Validation surveys should also involve spectrum analysis to identify and locate interference sources. Now, spectrum analysis may be built into your survey solution, or it might run as a separate hardware and software solution. When we’ve identified any interference sources, we generally have two options for optimization: either the interference source is removed, or the wireless LAN is configured in such a way as to avoid the interference. The chosen method will largely depend upon how critical the interfering system is to the business and its operations in comparison to the wireless LAN.
Now, let me finish this video by saying that having the tools and skills required to validate a wireless network against a set of requirements should be considered essential for a Wi-Fi field engineer. So, thank you for watching. Goodbye.
Validating a wireless network should involve the following activities:
- RF survey to validate RF coverage
- Validate correct roaming behavior
- Validate application performance and service availability
- Spectrum analysis to identify the presence of any interference sources
Popular RF validation tools:
- Ekahau: https://www.ekahau.com/
- iBwave: https://ibwave.com/ibwave-wi-fi-suite/
- NetAlly: https://www.netally.com/