Tuesday, October 25, 2016

Distance Azimuth Assignment

Introduction

GPS technology has improved significantly over the years. At this point, it would not be ideal to leave your GPS equipment at home when conducting a survey. However, there are situations where GPS technology and equipment will not be available, and a knowledge of manual survey techniques is very important to have. Surveying with a grid-based coordinate system works in many situations, but in many cases it will not be the ideal survey method. In this lab, a basic survey technique using distance and azimuth was used to map the locations of trees in Putnam Park on the UW-Eau Claire campus. 

Study Area

The study area of this project was Putnam Park on the UW-Eau Claire campus. The surveying took place on Putnam Trail, located behind the Davies Student Center. This was an ideal location to survey tree locations because of its interesting geographical setting. One side of the trail sits in a floodplain that turns into swampy land in the spring. The other side of the trail is part of the famous hill on the Eau Claire campus. Figure 1 is a map of the study area showing where the surveying took place on campus. 
Figure 1. This is a map that shows the study area of a
survey of tree locations in Putnam Park. The study area
is shown by the green box. 

Methods

The class divided into three groups. Each group had its own origin location from which the distance and azimuth were measured. Varying forms of technology were given to each group. The equipment included a basic GPS unit to find the origin point, a tape measure to measure the diameter of the trees that were mapped, a tape measure or a rangefinder to measure the distance from the origin point to each tree, and a sighting compass to read the azimuth. All of the data was recorded in a notebook to ensure that it could be kept and entered into a spreadsheet later. 

The method for collecting the data was relatively straightforward. One or two team members stood at the origin point to collect its initial latitude and longitude. Those team members also measured the distance from the origin point to each tree and collected the azimuth angle. One or two other team members were located at the tree being surveyed; they identified the tree species and measured its DBH (diameter at breast height). The remaining team members stood by to record the surveyed data. Team members rotated duties so that everybody got experience with each responsibility. Once the required ten trees were surveyed, the group shared the collected data so that everyone had their own hard copy. 

Once all of the groups had completed the survey, all of the data was compiled into a single spreadsheet. The compiled data was imported into ArcMap so that the survey could be represented on a map. The Bearing Distance to Line tool in ArcMap used the table to map the azimuth and distance in a vector format: a line stemmed out from each origin and pointed to each surveyed tree. The next step was to use the Feature Vertices to Points tool to show the origin point along with each of the surveyed trees. 
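The core of this conversion is simple trigonometry. The sketch below is a plain-Python illustration of the idea, not ArcMap's actual implementation; it assumes azimuths measured in degrees clockwise from north and a flat local coordinate frame.

```python
import math

def azimuth_to_offset(distance_m, azimuth_deg):
    """Convert a distance/azimuth pair to (east, north) offsets in meters.

    Azimuth is measured clockwise from north, so east uses sin()
    and north uses cos() -- the reverse of the usual math convention.
    """
    theta = math.radians(azimuth_deg)
    east = distance_m * math.sin(theta)
    north = distance_m * math.cos(theta)
    return east, north

# A tree 20 m from the origin at an azimuth of 90 degrees lies due east.
east, north = azimuth_to_offset(20, 90)
```

Adding these offsets to the origin's coordinates gives each tree's position, which is essentially what the line endpoints produced by the tool represent.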

Results

The original results of this survey were not ideal. Figure 2 shows the original mapping of our points. As you can see, the points are not all located inside the study area; one set of surveyed trees shows up miles south of it. At first it was hard to tell whether the cause was human error or technological error, since there is potential for both: it is easy to record the collected data incorrectly in the spreadsheet, and it is also possible that the GPS was giving an inaccurate location. There turned out to be two sources of error in our original data. I am confident one of them was human error: one of our origin points had two digits transposed in the X value. Because the study area is so small, minor errors like that can throw off the data to a large degree. 

Figure 2. This map shows the error in the original survey data.
Luckily the errors were easily fixed and the final data is
much more accurate. 

The final map includes fixes for the large error as well as for a smaller error that offset an origin point by less than 100 meters. Luckily the errors found in the data were easily fixed. Figure 3 shows the spreadsheet of data that was collected out in the field. Figure 4 shows the final map of the study area and the surveyed trees within it. The trees are represented by green triangles, and the distance and azimuth data is shown by the orange lines. 

Figure 3. This is the final spreadsheet of data that was collected
in the survey. 

Figure 4. This is the final map of the survey showing the azimuths from the origin points
and the surrounding trees that were surveyed.



Conclusion

It was interesting to learn about azimuths and how we can use them to create maps when we are without technology in the field. For the most part, I think the data that was collected was fairly accurate. I do believe the errors that were encountered came from the data entry process. Fortunately, we were only using three origin points, which made it easy to find where the errors were coming from. This lab helped me learn about other ways to collect data when technology isn't available. That is very important in this field because technology is great, but we cannot rely on it alone. If you rely on technology and it fails, you need to be able to work around that. 



Tuesday, October 18, 2016

Sandbox Survey: Part 2

Introduction

In the previous lab, small-scale digital elevation models were created from a landscape built in a sandbox. Elevation data was collected for the landscape and compiled into a spreadsheet organized into X, Y, and Z columns. This is important because the data needs to be in a coordinate format for the DEM to be created. The spreadsheet will be entered into ArcMap, and the data will be used to create rasters that depict the elevation. Five different interpolation methods will be used to show the elevation changes in the landscape. 

Methods

The first step in this project was to create a geodatabase to hold all of the elevation rasters, the spreadsheet, and the point feature class that needed to be created. Once the spreadsheet was imported, the point feature class was created. With the newly created point feature class entered into ArcMap, the grid format and the square shape of the landscape can be seen. Figure 1 shows the basic point feature class alone in ArcMap.
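Before any interpolation can run, the spreadsheet has to be read into X, Y, Z records. A minimal sketch of that parsing step in plain Python is below; the column names "X", "Y", "Z" are assumed, and the sample rows stand in for the real file.

```python
import csv
import io

# Stand-in for the survey spreadsheet; a real run would open the file
# instead of this in-memory sample.
sample = io.StringIO("X,Y,Z\n1,1,-4\n1,2,-6\n2,1,3\n")

# Parse each row into an (x, y, z) tuple, the coordinate format the
# point feature class (and the interpolators) need.
points = [(float(r["X"]), float(r["Y"]), float(r["Z"]))
          for r in csv.DictReader(sample)]
```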

Figure 1 is the result of the x,y coordinate plot that was created.


The next step was to use tools in ArcMap to create the five required interpolations: IDW, Natural Neighbors, Kriging, Spline, and TIN. These interpolation methods are defined below; the definitions come from the ArcGIS Pro documentation and ArcGIS Help. 
  • IDW (inverse distance weighted technique) assumes that things close to one another are going to be more similar to each other than things further away.
  • Natural Neighbors estimates unsampled areas from the closest surrounding samples and never produces values above the maximum or below the minimum of the sampled points. 
  • Kriging is a geostatistical method that generates an estimated surface from a scattered set of points. 
  • Spline gives a smooth surface by using a mathematical function. This allows for the most pleasing DEM model for this project. 
  • TIN (triangulated irregular network) is an assortment of triangles that are given individual values. For the sake of this project, the values are elevation. 
Once each of the tools that create these interpolations was run, the individual rasters needed to be opened in ArcScene. On their own the rasters do not show any elevation change; they just assign values to a 2-D image. ArcScene takes those values and creates a 3-D landscape, which makes it much easier to assess each interpolation method. 
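Of the methods above, IDW is the simplest to illustrate. The toy version below shows the core weighting idea (closer samples count for more); it is not ArcMap's implementation, and the sample points are made up.

```python
import math

def idw(sample_points, qx, qy, power=2):
    """Inverse distance weighted estimate of z at (qx, qy)."""
    num = den = 0.0
    for x, y, z in sample_points:
        d = math.hypot(qx - x, qy - y)
        if d == 0:
            return z          # query lands exactly on a sample point
        w = 1.0 / d ** power  # nearer samples get larger weights
        num += w * z
        den += w
    return num / den

# Four made-up corner samples; the query point sits in the middle,
# equidistant from all of them, so the estimate is their plain mean.
samples = [(0, 0, -10), (1, 0, -10), (0, 1, 0), (1, 1, 0)]
z = idw(samples, 0.5, 0.5)
```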

The 3-D images were then exported back into ArcMap so that each was accompanied by its attributes. The images are oriented so that the origin point is at the bottom, with the y-axis going to the left and the x-axis going to the right. This orientation was chosen because it does the best job of showing the large ridge in the landscape. 


Results


Figure 2 below shows all of the final elevation models from each interpolation method that was used. IDW and Natural Neighbors produced similar results for this project: both show each data point with some exaggeration, while the surrounding area is relatively smooth. Kriging and TIN both provide a rougher, "geometric"-looking DEM that does a good job of showing the elevation change but does not realistically depict the landscape. Spline is the most effective interpolation method for showing a realistic landscape. 



Figure 2 is the compilation of all of the interpolation methods run on the DEM.

Looking back at the survey, there are certain things that could have been changed to produce more accurate interpolations. The number of data points collected was more than likely not enough; the 11x11 measurement grid was, in hindsight, not nearly a fine enough resolution. Some of the grid cells were given multiple measurements to capture large elevation changes, but the way these were recorded was not correct. Instead of evenly dividing a cell into tenths, the cell was marked in sections from one to three, which threw off some of the measurement locations: a point that should have been recorded at 1.75 on the y-axis was marked as 1.3. With the experience gained from this project, if it were to be done again, more points would be added and the data collection method would be fixed. All in all, the models represent the original landscape pretty well, but there would be problems if the survey area were larger. 


Conclusion

To summarize the entire two-week project, a landscape was created in a sandbox and groups were expected to conduct their own survey of the landscape's elevation with fairly minimal instruction. The data that was collected was entered into ArcMap and turned into a digital elevation model using multiple interpolation methods. This project provided a lot of knowledge about how to go about collecting survey data. This was the most in-depth elevation survey project that I have done, so unfamiliarity with some of the processes did show in our results. Overall, this project provided a learning experience in the very interesting process of elevation surveying. 





Tuesday, October 11, 2016

Creation of a Digital Elevation Surface

Introduction

  • Define what sampling means, with a strong focus/emphasis on what it means to sample in a spatial perspective.
    • I would define sampling as retrieving data from specific sections in an overall study area. For example, sampling elevation from a plot of land requires taking elevation levels from many spread out points in the study area. 
  • List out the various sampling techniques
    • Random sampling, stratified sampling, cluster sampling, and systematic random sampling are all types of sampling. 
  • What is the lab objective?
    • The objective of this lab is to create a landscape in a sandbox and sample elevation of the landscape in a way that is most efficient to us. The landscape must contain a ridge, hill, depression, valley, and plain. 

Methods

  • What is the sampling technique you chose to use? Why? What other methods is this similar to and why did you not use them?
    • Our group decided to use systematic, stratified sampling. This is because we wanted to create a grid across our landscape and take at least one elevation point from each grid section. It is stratified because we took some extra measurements in specific areas where we could see a lot of elevation change. We didn't want to use a purely systematic sample because we wanted to make sure that areas with a lot of elevation change were more accurately sampled. 
  • List out the location of your sample plot. Be as specific as possible going from general to specific. 
    • Our sample plot represents a coastal region with mountains on the coast and depressions and hills on the other side of the mountains. The hills and depressions flatten out into a plain. 
  • What are the materials you are using?
    • We used a 114cm X 114cm wooden box to contain the sand that was molded into a landscape. A measuring stick was used to create the measurements for an even grid system and to measure the elevation inside the grids. Tacks were put on the box to indicate the x,y boundaries of each grid. String was used to create the grid. Finally, a notebook and pencil were used to record the elevations collected. 
  • How did you set up your sampling scheme? Spacing?
    • An X,Y plane was used. A 10x10 grid was created, and each cell was 11cm x 11cm. This was about as even as it could be made with the box being 114cm x 114cm. 
  • How did you address your zero elevation?
    • Sea level was considered the top of the box for us. This means that most of our land will be below sea level. 
  • How was the data entered/recorded? Why did you choose this data entry method?
    • We recorded all of our elevations in a notebook and converted them into a spreadsheet with values for X, Y, and Z. This will allow the data to be easily entered into the computer program.
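The systematic part of the scheme described above can be sketched in a few lines of Python: one measurement location at the center of every cell in the 10x10 grid of 11 cm cells. (The extra stratified points taken in high-relief cells would be appended by hand; the centering convention is my assumption, not something the group formally specified.)

```python
# Cell size of the sampling grid, in centimeters.
CELL = 11

# One (x, y) sample location at the center of each of the 100 cells.
grid = [(col * CELL + CELL / 2, row * CELL + CELL / 2)
        for row in range(10) for col in range(10)]
```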

Results

  • What was the resulting number of sample points you recorded?
    • We recorded a total of 145 sample points.
  • Discuss the sample values? What was the minimum value, the maximum, the mean, standard deviation?
    • The minimum value was -15, the maximum value was 16, the mean was -3.05, and the standard deviation was 6.13. These values show that the majority of our points were below sea level but the mountains were relatively high above sea level. 
  • Did the sampling relate to the method you chose, or could another method have met your objective better?
    • I think that our sampling method was the best choice. We decided to take extra points around our large elevation changes so that those extreme changes could be seen. Our overall mean being below sea level is because the mountain and hill were the only areas that peaked above sea level. 
  • Did your sampling technique change over the survey, or did your group stick to the original plan? How does this relate to your resulting data set? 
    • We stuck to our original plan throughout the survey. This didn't really affect our resulting data set. It turned out pretty much how it was expected to.
  • What problems were encountered during the sampling, and how were those problems overcome?
    • The areas where sand was above our sea level were difficult to measure because the string couldn't be placed evenly over them. Our solution was to use the string on the next grid space over and use a ruler to read the measuring stick that was placed in the area being measured. 
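Summary statistics like the ones reported above are straightforward to reproduce with Python's statistics module. The elevation values below are made-up stand-ins, not the actual 145 recorded points.

```python
import statistics

# Hypothetical sample of recorded Z values (cm relative to sea level).
elevations = [-15, -8, -3, 0, 4, 16]

summary = {
    "min": min(elevations),
    "max": max(elevations),
    "mean": statistics.mean(elevations),
    "stdev": statistics.stdev(elevations),  # sample standard deviation
}
```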

Conclusion

  • How does your sampling relate to the definition of sampling and the sampling methods out there? 
    • I think we did a pretty good job sticking to our systematic sampling method along with taking extra points where we needed to. 
  • Why use sampling in a spatial situation?
    • Sampling is an efficient way of evenly collecting data in an organized fashion. Sampling helps to collect spatial data that is needed. 
  • How does this activity relate to sampling spatial data over larger areas?
    • It is the same idea as sampling spatial data over a larger area. The difference is the sampling grid would be a different size and you would actually have to move around to collect the data. 
  • Using the numbers you gathered, did your survey perform an adequate job of sampling the area you were tasked to sample? How might you refine your survey to accommodate the sampling density desired?
    • I think that our survey did an adequate job to represent the sampled area. It is hard to know quite yet if it truly was adequate because the numbers haven't been put into the program that will create the digital elevation model. If the DEM doesn't come out as hoped, I think the biggest thing that we could have done to be more accurate would just be to take more points. Dividing each grid into fourths would give us four points to every one that we collected which would give us a more accurate DEM.





Tuesday, October 4, 2016

Hadleyville Cemetery GIS

Introduction

Hadleyville Cemetery is a small cemetery located on a country road in Eau Claire County. Just recently, Hadleyville lost all of the data associated with the cemetery, meaning the graves and the people buried in them were no longer logged in a database. The task at hand was to go to the cemetery, collect the data, map out each grave, and create a GIS for the Hadleyville cemetery to replace the lost data. A GIS was created because, unlike a simple map, a GIS gives a visualization of the cemetery while also keeping a database for all of the graves. The data was collected using a notebook. A drone was used to create the aerial map of the cemetery, and each grave was heads-up digitized into the map. The GIS will allow the cemetery to keep organized records of each of the graves along with a map for visitors to use. 

Study Area

Hadleyville Cemetery is located at the southern end of Eau Claire County, in the town of Eleva, on a country road. Figure 1 shows where the cemetery is located. The data was collected in early fall, before the leaves began to turn.

Figure 1. Map of Hadleyville cemetery via Google Maps.

Methods

To collect the data, groups went to the cemetery and recorded each tombstone's data in a notebook. The original plan was to map out each grave using a survey-grade GPS. Unfortunately, the GPS was too time-consuming a process, and instead each grave was heads-up digitized into the GIS. For the task at hand, a survey-grade GPS would have been nice to use, but it may also have been overkill. The accuracy of the drone image is high enough that heads-up digitizing is an ideal method for creating this GIS. The reason the data was recorded in a notebook is that technology is not always the most reliable medium. As seen in the issue at hand, digital data can sometimes be lost for unknown reasons; pen and paper are far more reliable for this sort of task. 

Once all of the data and imagery was collected, the next task was to transfer the data into a digital medium that would allow for the creation of a GIS. The first step was to combine everyone's data into a single shared spreadsheet. Once all of the data was combined, it was normalized: the class decided what the most important attributes were for each gravestone. The data that was kept included first and last name, middle initial, legibility, stone type, year of birth and death, whether the stone was standing, the occupancy number, and any notes. Each grave was then assigned an ID that would match up with the aerial image. The last step was to bring the spreadsheet into ArcMap and use a table join to join the data to the digitized tombstones on the map. The results of these methods are shown in the next section. 
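Conceptually, the table join matches attribute rows to digitized features by the shared grave ID. A minimal Python sketch of that idea is below; the field names and values are illustrative stand-ins, not the actual cemetery schema.

```python
# Attribute rows from the spreadsheet, keyed by grave ID (made-up data).
attributes = {
    1: {"last_name": "Smith", "year_death": 1904},
    2: {"last_name": "Jones", "year_death": 1911},
}

# Digitized tombstone features with their map coordinates.
features = [{"grave_id": 1, "x": 100.0, "y": 200.0},
            {"grave_id": 2, "x": 101.5, "y": 200.2}]

# Join: each feature picks up the attribute row with the matching ID;
# unmatched IDs simply contribute no extra fields.
joined = [{**f, **attributes.get(f["grave_id"], {})} for f in features]
```

This is the same matching ArcMap's table join performs, which is why a consistent ID in both the spreadsheet and the digitized features was essential.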


Results

Figure 2 shows part of the table once it was entered into ArcMap and joined to the tombstone feature class. Any spot in the table with a "Null" value is the result of either the tombstone not providing the data or the tombstone not being legible. 
Figure 2. Table after data was joined to the tombstone feature class. 

The data from the table was joined to the tombstone feature class that was created. Figure 3 shows the final map of the graves over the aerial image. Each grave is shown by a yellow triangle so that it can be found easily. 
Figure 3. The final map of the Hadleyville cemetery and each
of the mapped out grave sites. 
The final map that is displayed above doesn't showcase the most important part of this GIS. Figure 4 below shows how each grave can be selected. Once selected, all of the recorded data will be displayed. This will allow people to find graves that they are looking for without having to walk all around the cemetery. Some graves have pictures to go with them. 
Figure 4. Each grave can be selected and will show the data collected.


Conclusion

Overall, the project went smoothly. One thing that probably would have sped up the process is if each group had been more vocal about which rows of graves they were going to collect data for; at first, bringing all of the data together was a slow and confusing process. Regardless, the project was still completed. For the most part, all of the groups collected data in similar ways, which made it easy to read other groups' data. Unfortunately, there wasn't enough time to go back to the cemetery and double-check that our combined data was put together correctly. If the opportunity presented itself, it would be best to take the combined spreadsheet out to the cemetery to check that each grave ID matched up with where it was actually placed. Overall, the survey was pretty successful: besides missing some of the photos of graves, it seems that all of the data was collected. The good thing about the GIS that has been created is that it can easily be updated and maintained by the city.