Wednesday, July 26, 2017
GIS Programming (GIS5103): Module 10
In this week's module, we learned how to create a script tool for ArcGIS and how to modify a script to fit the tool's needs. We created a toolbox to house the newly created script tool. The parameters were set in the Parameters tab of the tool's Properties window so that the imported script would function in ArcMap. After setting the parameters in ArcMap, the script had to be modified in PythonWin: the parameter variables had to be changed from hard-coded file paths to open inputs, with the arcpy.GetParameter() function applied to each parameter variable. Along with changing the variables, the print statements had to be replaced with arcpy.AddMessage() statements so that ArcMap can display the messages in the Results window.
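A minimal sketch of the change described above, assuming a tool with two parameters at index 0 and 1 (the demo paths and the import fallback are illustrative, not the assignment's actual files; the assignment used arcpy.GetParameter(), while this sketch uses the closely related arcpy.GetParameterAsText()):

```python
# Sketch: replace hard-coded paths with script-tool parameters and
# replace print with AddMessage so output appears in the tool dialog.
try:
    import arcpy
    get_param = arcpy.GetParameterAsText   # reads a tool parameter by index
    log = arcpy.AddMessage                 # message shows in the Results window
except ImportError:
    # Fallback so the sketch runs outside ArcGIS (hypothetical demo paths)
    _demo = ["C:/demo/input.shp", "C:/demo/output.shp"]
    get_param = lambda i: _demo[i]
    log = print

in_fc = get_param(0)    # was a hard-coded input path
out_fc = get_param(1)   # was a hard-coded output path
log("Input feature class: " + in_fc)
log("Output feature class: " + out_fc)
```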
GIS Programming (GIS5103): Peer Review Article #2
Article: Python based GIS tools for landscape genetics: visualising genetic relatedness and measuring landscape connectivity
Author: Thomas R. Etherington
URL Link: http://onlinelibrary.wiley.com.ezproxy.lib.uwf.edu/doi/10.1111/j.2041-210X.2010.00048.x/full
This article discusses using GIS and Python for landscape genetics research, which helps researchers gain a better understanding of spatial ecological processes. Using GIS is vital to this particular area of research; however, a degree of customization is needed to process the data that is often beyond the non-specialist (Etherington, 2011). To remedy this issue, Python was used to create a series of script-based GIS tools designed specifically for landscape genetics studies. These scripts allow the user to convert files, visualize genetic relatedness, and quantify landscape connectivity using least-cost path analysis (Etherington, 2011). The scripts are stored in an ArcToolbox that provides free access to them along with the Python code that created the tools. By creating these Python scripts, researchers are able to fully utilize the current software; the user community can also provide further enhancements to the scripts, which cuts down on the time spent developing common solutions (Etherington, 2011).
Etherington elaborated very well on how he applied GIS and Python to his own research. Even though the article was quite short, it was packed with useful information on how Python can be used to customize tools for GIS. I found this article very interesting because the GIS tools that Etherington created for landscape genetics could also be applied to the movement of past and current infectious and contagious diseases in different parts of the world.
Etherington, T. R. (2011). Python based GIS tools for landscape genetics: visualising genetic relatedness and measuring landscape connectivity. Methods in Ecology and Evolution, 52-55.
Monday, July 24, 2017
GIS Programming (GIS5103): Module 9
In this week's module, we learned how to create raster output using Python, and we also learned how to identify areas matching a particular set of slope, aspect, and landcover parameters. The ideal areas had forest landcover with classification codes 41, 42, and 43, slope between 5 and 20 degrees, and aspect between 150 and 270 degrees. This assignment was not too difficult to complete. The only little hiccup I had when creating the script was accidentally beginning the if statement with 'if arcpy.CheckOutExtension...' instead of 'if arcpy.CheckExtension...'. This mistake was easily cleared up, and the script worked like a charm.
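The per-cell test behind those criteria can be restated in plain Python. This is only a sketch of the logic (the assignment itself used arcpy raster tools, not a per-cell function like this):

```python
# Suitability test for one raster cell: forest landcover (codes 41-43),
# slope 5-20 degrees, aspect 150-270 degrees.
FOREST_CODES = {41, 42, 43}

def is_suitable(landcover, slope_deg, aspect_deg):
    """Return True when a cell meets all three ideal-area criteria."""
    return (landcover in FOREST_CODES
            and 5 <= slope_deg <= 20
            and 150 <= aspect_deg <= 270)
```

For example, a cell with landcover 42, a 12-degree slope, and a 200-degree aspect passes, while a non-forest cell fails regardless of slope and aspect.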
Saturday, July 22, 2017
Applications in GIS (GIS5100): Lab 8
In this week's lab, we had to do a damage assessment on areas in Ocean County, NJ that were affected by Hurricane Sandy. We had to prepare the data for the assessment, use before-and-after storm imagery to assess the structural damage to each parcel, determine the distance from the coastline within which most buildings were impacted by the storm, and combine parcel data with the damage assessment layer.
I mostly relied on the structural damage scale to help me assess the damage to the building structures on the parcels. Buildings that were completely torn down I classified as Destroyed, buildings missing major parts of their structure as Major Damage, buildings with minor structural damage as Minor Damage, buildings with debris across the parcel as Affected, and buildings that appeared untouched as No Damage. The difficult decision for me was determining the structure type of each building. If I had information about what each parcel was used for, it would have helped me tremendously with identifying the structure types. Also, if the post-Hurricane Sandy imagery had been taken at the same time of day and with similar lighting, it would have helped me make better deductions about the parcel damage.
From the findings, I noted that the worst damage to the building structures occurred within 100 meters of the coastline, and the farther the buildings were from the coastline, the less damage they had. 100% of the buildings within 100 meters of the coastline were damaged, 95% of buildings between 100 and 200 meters of the coastline were damaged, and 19% of buildings between 200 and 300 meters were damaged. The findings seem accurate enough to be applied to other areas near the study location.
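The band percentages above come from bucketing each building by its coastline distance and dividing damaged counts by band totals. A small sketch of that arithmetic (the band width and the sample counts below are illustrative, not the lab's actual tallies):

```python
def distance_band(dist_m):
    """Return the 100 m coastline band a distance falls in, e.g. '100-200'."""
    lo = int(dist_m // 100) * 100
    return "{0}-{1}".format(lo, lo + 100)

def pct_damaged(damaged_count, total_count):
    """Percent of buildings in a band marked as damaged."""
    return round(100.0 * damaged_count / total_count, 1)
```

So a building 150 m from the coast falls in the "100-200" band, and a band with 19 damaged buildings out of 100 reports 19.0%.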
Wednesday, July 19, 2017
Applications in GIS (GIS 5100): Lab 7
In this week's lab, we had to use a DEM to determine which areas would be impacted by sea level rise and to overlay this data with census information to estimate the number of people who could be affected.
To model the two sea level rise scenarios of 3 feet and 6 feet, I had to limit the raster processing to Honolulu, since this is the study area. I went into the Environment Settings, selected Raster Analysis, and set the mask to Honolulu. After setting the mask, I created the two flood zone scenarios using the Less Than tool. The tool created a grid in which flooded cells have a value of 1 and dry cells have a value of 0. Once the rasters were made, I was able to determine the total area of each flood zone by multiplying the flooded cell count by the cell area. The flood elevation was created using two different tools. The first was Extract by Attributes, which takes the lidar DEM as one input and the SQL expression "VALUE" < 2.33 as the other; this gives elevation values to the flood zone. To get the depth of the flooded area, the Minus tool was applied: the first input was Flood Zone 1, which had a sea level of 3 feet, and the second input was the altered lidar DEM of the flood zone elevation. The results allowed us to see what a sea level rise of 6 feet would look like in the flooded areas of Honolulu. If my calculations are correct, the first scenario of a 3-foot sea level rise covers a total area of 5,710,950 m2, and the second scenario of a 6-foot rise covers 23,793,075 m2.
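The area calculation is just cell count times the area of one square cell. A sketch of that step (the cell count and cell size below are made-up examples, not the lab's actual values):

```python
def flood_area_m2(flooded_cell_count, cell_size_m):
    """Total flooded area in square meters for a binary flood-zone raster,
    counting only the cells with value 1."""
    return flooded_cell_count * cell_size_m ** 2
```

For instance, 1,000 flooded cells on a hypothetical 5 m grid would give 25,000 m2.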
GIS Programming (GIS5103): Module 8
In this week's module, we had to create a .txt file and use a Python script to write into the file. The information written into the file was the point data for all vertices: each point had to include the feature OID, vertex ID, X coordinate, Y coordinate, and the name of the river feature. Writing this script was another challenging one to work on. I had one line of code misplaced inside the for loop, which caused only one line of information to be written to the .txt file. The line of code I had to move was output = open("rivers_TScranton.txt","w"). I moved this line outside of the for loop, and the script worked like a charm.
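The shape of that fix can be sketched without ArcGIS: open the file once, outside the loop, then write one line per vertex. The rows here are a hard-coded stand-in for search cursor output (the file name, field order, and values are illustrative):

```python
# Stand-in for cursor rows: (feature OID, vertex ID, X, Y, river name)
rows = [
    (1, 0, 512000.0, 4100000.0, "Susquehanna"),
    (1, 1, 512350.5, 4100120.2, "Susquehanna"),
]

outfile = open("rivers_demo.txt", "w")   # opened once, OUTSIDE the loop
for oid, vid, x, y, name in rows:
    outfile.write("{0} {1} {2} {3} {4}\n".format(oid, vid, x, y, name))
outfile.close()
```

Opening the file inside the loop with mode "w" would truncate it on every pass, which is why only the last vertex survived in the buggy version.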
Wednesday, July 12, 2017
GIS Programming (GIS5103): Module 7
In this week's module, we had to create a new geodatabase, copy all of the data from the Data folder into the new geodatabase, and populate a dictionary with the name and population of every 'County Seat' city in New Mexico. I ran into some issues when trying to create this script. I had to use a field-delimiter function to build the where clause for the Search Cursor before I could populate the dictionary. This script was quite the challenge to complete this week, since my knowledge of programming is still at a beginner's level.
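The dictionary-building step can be sketched in plain Python. The rows below stand in for search cursor output over the cities feature class; the field order, city names, and population figures are illustrative, not the assignment's actual data, and the `if` test plays the role the where clause plays in the real script:

```python
# Stand-in cursor rows: (city name, feature type, population)
city_rows = [
    ("Santa Fe", "County Seat", 67947),
    ("Madrid", "Village", 204),
    ("Grants", "County Seat", 9182),
]

county_seats = {}
for name, feature_type, pop in city_rows:
    if feature_type == "County Seat":   # filters like the cursor's where clause
        county_seats[name] = pop
```

After the loop, the dictionary maps each county seat's name to its population and skips every other feature type.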
Applications in GIS (GIS5100): Lab 6 Crime Analysis
In this week's lab, we learned about different hotspot mapping techniques that can be applied to finding where certain crimes happen most often. The three techniques were grid-based mapping, kernel density, and Local Moran's I.
When gathering the data for the top 20% of burglaries within the grid-based thematic map, I had 507 grid cells with different burglary counts. I divided the 507 cells by 5, which gave me 101.4; I rounded down to 101 and selected the top 101 cells with the highest burglary counts. The next map I created to pinpoint high-burglary areas was a kernel density map. The parameters I used for the kernel density analysis were 100 ft for the output cell size and a search radius of 2,640 ft. After running the kernel density tool, I reclassified the data and excluded any zero values. The mean value was 35.16, which I multiplied by 3 to get 105.48. Any cell below 105.48 I excluded from the map, and I then converted the raster to a polygon. I had to use Select By Attributes to select features with a grid code of 1 to get the final hotspot map. For the last hotspot map, Local Moran's I was used. Before running the tool, the Blockgroups2009Fixed.shp and the 2007 burglaries file had to be spatially joined, and a crime rate field had to be added to the data. The crime rate was calculated as the number of burglaries per 1,000 housing units. After the Local Moran's I tool was run, Select By Attributes helped select the clusters with high crime rates.
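The two thresholds above can be restated as plain arithmetic, a quick check on the numbers in the write-up:

```python
# Top-20% cell count for the grid-based map
n_cells = 507
top_20pct = int(n_cells / 5)    # 507 / 5 = 101.4, truncated to 101

# Kernel density cutoff: three times the mean of the nonzero cells
mean_density = 35.16
cutoff = mean_density * 3       # cells below this value were excluded
```

This reproduces the 101-cell selection and the 105.48 exclusion threshold used in the lab.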