Using Metacoding for Philadelphia LandCare Site Manipulation

For my capstone project, I will be working with the Pennsylvania Horticultural Society (PHS) to develop a streamlined data management system for their LandCare Program. The LandCare Program strives to rehabilitate vacant land parcels in the city of Philadelphia, addressing the urban revitalization and land vacancy issues plaguing the city. The image below shows what some of these land areas can look like.

So far, the project has consisted of constructing a database in SpatiaLite to work with the project's shapefiles; see the image below for an overview of the GUI. I selected SpatiaLite because it is built on SQLite and connects seamlessly with ArcMap. This ultimately allows the user to create and manipulate geodatabases using code written in SpatiaLite and immediately access the results in ArcMap.
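As a minimal sketch of what that setup looks like on the SpatiaLite side (the table name, columns, and SRID below are placeholders, not the actual LandCare schema):

-- Placeholder schema; names and SRID are assumptions, not PHS's actual tables.
CREATE TABLE parcels (
    parcel_id INTEGER PRIMARY KEY,
    address TEXT,
    sq_ft REAL
);
-- Register a geometry column so the table is readable as a spatial layer:
SELECT AddGeometryColumn('parcels', 'geom', 2272, 'MULTIPOLYGON', 'XY');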

My first sprint (May 30th to June 8th) consisted primarily of setting up documentation for the project, in addition to the following items: creating a geodatabase, working to remove sites that have new uses (and are no longer PHS LandCare sites), and writing base code for updating tables based on an Excel spreadsheet.

Documentation was created so that formal directions would be available for anyone working on this project in the future. In addition, I keep detailed notes of what I have done thus far and can update them as the project progresses.

Most of the coding done thus far has been metacoded in Excel. This streamlines the overall update process to the point where someone with no SQL experience can use the spreadsheet and get valid results. In the image below, an example that deletes a row from a table in SpatiaLite, the user need only input a small amount of information, such as the table name and the identifier of the row being deleted. This benefits not only the inexperienced coder in the future, but also anyone who must update many files at once, since only one column must be manipulated. The end result is a set of statements that can be copied and pasted into the SpatiaLite query window.
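As a rough illustration of the idea (the cell references, column names, and values below are hypothetical, not the actual LandCare workbook):

-- With the table name in cell A2 and the row identifier in cell B2, an
-- Excel formula such as
--   =CONCATENATE("DELETE FROM ", A2, " WHERE parcel_id = ", B2, ";")
-- fills a third column with ready-to-paste statements like:
DELETE FROM parcels WHERE parcel_id = 123;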

So far, there are metacoding files for updating new-use sites and for updating columns in general. When updating new-use sites, we must copy information from both parcels (individual lots) and sites (many lots together) to develop one overarching row of data. This process was broken down into four steps: two updates, followed by two deletions that remove the land area from the parcel and site tables, since it is no longer in the maintenance database; a rough sketch follows below. The individual-update metacoding was created for quick changes such as square footage. Using this process, a large amount of information can be quickly added to the pre-written code and then executed in SpatiaLite.
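As a loose sketch of that four-step shape (every table and column name below is invented for illustration; the actual metacoding file targets the PHS schema):

-- Invented names throughout; only the four-step structure mirrors the workflow.
-- Steps 1-2: copy parcel- and site-level attributes into one overarching row
UPDATE new_use_summary
SET sq_ft = (SELECT sq_ft FROM parcels WHERE parcel_id = 123)
WHERE record_id = 1;
UPDATE new_use_summary
SET site_name = (SELECT site_name FROM sites WHERE site_id = 'S01')
WHERE record_id = 1;
-- Steps 3-4: remove the land area from the parcel and site tables,
-- since it is no longer in the maintenance database
DELETE FROM parcels WHERE parcel_id = 123;
DELETE FROM sites WHERE site_id = 'S01';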

Finally, in sprint two, the next task will be determining how to update parcel and site data for address changes. Since many sites may lose or gain a parcel over the course of the year for maintenance purposes, it is important that adequate files are maintained on current geometries. This will most likely again be completed in several steps in the form of a metacoded file.

Find the best-matched polygons for use in a time series using pgAdmin

Goal:
Find the best-matched polygons in a time series using a PostGIS Spatial Database
Background:
For a previous project, a comparison of housing data from the 1930s to current and historic US Census data was required. The geographic portion of the census data was obtained from the National Historical Geographic Information System (NHGIS), which produces downloadable shapefiles with a GISJOIN field allowing for convenient linking of the shapefiles to tables of census data also available through NHGIS. The 1930s housing data was in the form of an image of a 1937 Home Owners' Loan Corporation (HOLC) map of Philadelphia, available here. Very broadly, the HOLC graded neighborhoods based on race and class, and these grades were then used to determine the likelihood of housing loan assistance. There is debate over whether the HOLC process normalized race-based home lending practices and led to further housing issues such as redlining (more info).

Process:
The HOLC map data was pulled into a GIS and a shapefile was created of the neighborhood grading (a process often listed as "heads-up digitizing" in job postings); here is a handy tutorial for ArcMap and another using QGIS. The shapefile and HOLC jpeg are available on GitHub.

The HOLC neighborhood grading scheme does not coincide with any current (or historic) census arrangement, so the spatial querying capabilities of the PostgreSQL database engine (accessed through pgAdmin) were used to match the newly created HOLC digital data with the NHGIS decennial census geographic data. The first step is to create a PostGIS spatial database and use pgShapeLoader to load the shapefiles (the HOLC and NHGIS spatial data).
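The database side of that step is standard PostGIS setup; the database name and file names below are placeholders:

-- Run once in the target database to enable spatial types and functions:
CREATE EXTENSION postgis;
-- pgShapeLoader is the GUI loader; the command-line equivalent would be:
--   shp2pgsql -s 4326 holc1937.shp public.holc1937 | psql -d census_db
--   shp2pgsql -s 4326 us2010.shp public.us2010 | psql -d census_db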

After much trial and error, the following code worked best:

CREATE TABLE overlap2010census AS
WITH limited AS (
    SELECT DISTINCT us2010.*
    FROM us2010, holc1937
    WHERE ST_DWithin(us2010.geom, holc1937.geom, 0.25)
    ORDER BY us2010.gid
)
/* the section above limits the query of the 2010 census polygons (entire US)
   to those near the HOLC polygons; the distance input can vary based on the
   projection of the files */
SELECT DISTINCT ON (limited.gid)
    limited.gid AS us2010a_gid,
    holc1937.gid AS holc_gid, holc1937.name, holc1937.grade,
    ST_Area(ST_Transform(limited.geom, 4326)::geography) AS area_check,
    ST_Area(ST_Intersection(ST_Transform(limited.geom, 4326)::geography,
                            ST_Transform(holc1937.geom, 4326)::geography)) AS intersect_area_2010_meters,
    ST_Area(ST_Intersection(ST_Transform(limited.geom, 4326)::geography,
                            ST_Transform(holc1937.geom, 4326)::geography)) /
        ST_Area(ST_Transform(limited.geom, 4326)::geography) * 100 AS percent_overlap,
    /* the expressions above calculate the area of the returned census and
       HOLC polygons and the percent of overlap */
    limited.*
FROM limited, holc1937
WHERE limited.geom && ST_Expand(holc1937.geom, 1)
  AND ST_Area(ST_Intersection(limited.geom, holc1937.geom)) > 0
/* joins the files where the geometries overlap */
ORDER BY limited.gid,
         ST_Area(ST_Intersection(ST_Transform(limited.geom, 4326)::geography,
                                 ST_Transform(holc1937.geom, 4326)::geography)) DESC;
/* sorts by the largest amount of overlap and, in conjunction with the
   DISTINCT ON clause, returns only the first instance of each census polygon
   that overlaps the target polygon */

While this method requires the user to update and rerun the code for each comparison, the resulting tables can be exported to your GIS, or QGIS can connect directly to the database if you prefer that option. Because the query also creates a unique identifier for each year and polygon of the census data, a crosswalk/key table was built from this output, which allowed census data (race, income, housing status, etc.) for multiple years to be attached via the NHGIS GISJOIN field previously mentioned. With multiyear census data attached through the crosswalk table, comparisons of the HOLC geographic areas could then be made regardless of shifting census borders.
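For instance, once the crosswalk table exists, attaching a year of census attributes is a single join on GISJOIN; the table and column names below are assumptions for illustration, not the project's actual names:

-- Illustrative only: crosswalk2010 and nhgis2010_tables are placeholder names.
SELECT cw.holc_gid, cw.grade, cen.gisjoin, cen.median_income
FROM crosswalk2010 AS cw
JOIN nhgis2010_tables AS cen
    ON cen.gisjoin = cw.gisjoin;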

Courses Offered Fall 2017

The following courses are being offered Fall 2017. All times are 5:30pm to 8pm. Please refer to Banner for more information.

Monday

  • GUS 5062 – Fundamentals of GIS
  • GUS 8065 – Cartographic Design (required)

Tuesday

  • GUS 5062 – Fundamentals of GIS
  • GUS 5065 – Urban GIS (elective)
  • GUS 5069 – GIS for Health Data Analysis (elective)
  • GUS 5161 – Statistics for Urban Spatial Analysis

Wednesday

  • GUS 8067 – Spatial Database Design (required)

Thursday

  • GUS 5063 – Remote Sensing (elective)
  • GUS 5068 – Census Analysis with GIS (elective)

Friday

  • GUS 9187 – GIS Capstone (required, does not meet every week)

Determining Crimes Between Schools and Public Libraries

There are about 550 schools and 55 public libraries in Philadelphia. Among US cities large and small, Philadelphia has one of the highest crime rates (44 per one thousand residents). This project tries to determine the risk students face when traveling to the nearest public library. It also counts how many schools fall within one mile of each library, which shows whether the libraries are unevenly distributed across the city or whether the number of nearby schools was considered when they were established. It also maps where the schools and libraries are and how many crimes happened around each, which helps predict the likelihood of becoming a victim of neighborhood crime. For this I used PostgreSQL, which makes it straightforward to calculate and measure the risk along the way from a school to the closest library.

I used three main data sources: school data as a .csv, library data as a shapefile, and crime data covering all reported crimes in Philadelphia from 2007 to 2014. All of these were acquired from OpenDataPhilly, an open data portal. After downloading the necessary data, I performed a data normalization to reduce redundancy. To use the data with PostgreSQL, it must be uploaded to the server (use the shp2pgsql command for the shapefiles and the SQL console for the .csv file). The map below shows the locations of schools and libraries, the crime incidents within one mile of each school, and the number of schools within one mile of each public library.
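As a sketch of that loading step (the file paths, database name, and .csv column names are assumptions; the actual OpenDataPhilly field names may differ):

-- Shapefile: load from the command line into the phl schema, e.g.
--   shp2pgsql -s 2272 philly_libraries.shp phl.philly_libraries | psql -d phl_db
-- .csv: create a table and COPY the file in through the SQL console
CREATE TABLE phl.all_philly_school (
    facil_name text,
    lon double precision,
    lat double precision
);
COPY phl.all_philly_school FROM '/path/to/schools.csv' WITH (FORMAT csv, HEADER true);
-- Add a point geometry in the same projection as the other layers (EPSG:2272)
ALTER TABLE phl.all_philly_school ADD COLUMN geom geometry(Point, 2272);
UPDATE phl.all_philly_school
SET geom = ST_Transform(ST_SetSRID(ST_MakePoint(lon, lat), 4326), 2272);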

(Figure: schools and libraries with one-mile buffers)

After running some queries, I found that there are many schools where more than 2,000 crimes happened within 1,000 feet, three schools where more than 5,000 crimes happened, and five libraries where more than 2,000 crimes happened over those seven years. The libraries are unevenly distributed around the city relative to the number of schools: some libraries have only one school within one mile, while two libraries have more than 20. There are even schools that are more than a mile and a half from the closest library. The figure below shows the lines between the schools and their closest libraries, with a 100-foot buffer around each line.


CREATE TABLE phl.shortest_distance_buffer AS
SELECT e.from_school, e.close_library,
       ST_Buffer(e.geom, 100)::geometry(Polygon, 2272) AS geom
FROM (
    SELECT d.school AS from_school,
           d.library AS close_library,
           ST_MakeLine(d.geom1, d.geom2) AS geom
    FROM (
        SELECT s.facil_name AS school,
               s.geom AS geom1,
               a.branch_nam AS library,
               a.geom AS geom2,
               ST_Distance(s.geom, a.geom) AS distance
        FROM phl.all_philly_school AS s
        CROSS JOIN LATERAL (
            -- nearest library to each school, via the KNN distance operator
            SELECT l.branch_nam, l.geom
            FROM phl.philly_libraries AS l
            ORDER BY l.geom <-> s.geom
            LIMIT 1
        ) AS a
    ) AS d
) AS e;


SELECT a.from_school, a.close_library, COUNT(b.objectid)
FROM phl.shortest_distance_buffer AS a
JOIN phl.philly_crime_incident_coded AS b
    ON ST_Contains(a.geom, b.geom)
GROUP BY a.from_school, a.close_library
ORDER BY COUNT(b.objectid);

I used the queries above to draw a straight line from each school to its closest library and to create a 100-foot buffer around each line. The second query counts the crimes within each buffer. The intention is to determine the number of crimes that happened along each buffered line, and from that the possibility of becoming a victim when a student walks from school to the closest library. The results show that there are 14 school-library lines where more than 650 crimes happened from 2007 to 2014; along those routes, a student is more likely to become a victim of neighborhood crime than along the other 531 lines. Conversely, there are 12 lines between schools and their closest libraries where fewer than 10 crimes happened from 2007 to 2014.

There are some limitations to this project. The straight lines created between schools and their closest libraries are not the actual routes to a library, so the crime calculations are an approximation rather than an exact measure. Also, the crime calculations do not consider the time of the crime or the time a student would actually travel to a library; both, along with the type of crime, would be important for a rational estimate of the crimes occurring within the buffer areas. Given these limitations, further research could use pgRouting to get the exact routes and distances between each school and its closest library and connect those results with the crimes, considering time and type, to build a report on the possibility of becoming a victim of neighborhood crime when a student goes to a library. The research could also identify the alternative routes and times of day with the lowest likelihood of victimization (the safest way and time).
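As a hedged sketch of that pgRouting idea, assuming a street network table phl.street_edges(id, source, target, cost) prepared with pgr_createTopology (the table, node ids, and database setup are all placeholders):

-- Placeholder names and node ids throughout; pgr_dijkstra is the standard
-- pgRouting shortest-path call.
SELECT SUM(e.cost) AS route_length
FROM pgr_dijkstra(
        'SELECT id, source, target, cost FROM phl.street_edges',
        1234,   -- node nearest a given school (placeholder)
        5678    -- node nearest its closest library (placeholder)
     ) AS r
JOIN phl.street_edges AS e ON e.id = r.edge;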


For project details, contact: kamol.sarker@temple.edu

Courses Offered Spring 2017

The following courses are being offered Spring 2017. All times are 5:30pm to 8pm. Please refer to Banner for more information.

Monday

  • GUS 8068 – Web Mapping and Map Servers (elective)

Tuesday

  • GUS 5062 – Fundamentals of GIS
  • GUS 8066 – Application Development (required)

Wednesday

  • GUS 5062 – Fundamentals of GIS
  • GUS 5063 – Remote Sensing (elective)
  • GUS 5066 – Environmental Applications of GIS (elective)
  • GUS 9187 – GIS Capstone (required, does not meet every week)

Thursday

  • GUS 5073 – Geovisualization (elective)
  • GUS 5065 – Urban GIS (elective)

Courses Offered Fall 2016

The following courses are being offered Fall 2016. All times are 5:30pm to 8pm. Please refer to Banner for more information.

Monday

  • GUS 5062 – Fundamentals of GIS
  • GUS 8065 – Cartographic Design (required)
  • GUS 9187 – GIS Capstone (required, does not meet every week)

Tuesday

  • GUS 5062 – Fundamentals of GIS
  • GUS 5068 – Census Analysis with GIS (elective)
  • GUS 5161 – Statistics for Urban Spatial Analysis

Wednesday

  • GUS 8067 – Spatial Database Design (required)

Thursday

  • GUS 5000 – Remote Sensing (elective)
  • GUS 5067 – GIS and Location Analysis (elective)