Check this page for updates on upcoming classes, our learning goals, and the lesson modules we'll use to get there.

DAT 102: Introduction to Data Analytics

Course Schedule and Lesson Guides

Each class session has its own box.



Week 1: Sat 20 Jan 2018 - Fri 26 Jan

Data schemas, data collection, and essential analysis

Study Topics:

  • Organizing structured data in tables
  • Approaches to data sampling
  • Collecting (Messy) data
  • Using spreadsheets for first-pass analysis

Learning Objectives

  • Design a coding schema for basic data table work
  • Conduct basic analysis on structured data
  • Respond intelligently to the question: what is data analytics?

Learning Exercises:

  1. Learn about one another with a structured data creation exercise
  2. Grading performance analysis
  3. Data ratings algorithms and checking them

Learning Resources

Doing Data Science code set

Columbia Data Science Program

Western PA Regional Data Center



Vehicle study columns:

  1. Visible use of mobile computer device/phone
  2. Driver not looking at road ("The Look")
  3. # passengers
  4. Youth in car (boolean)
  5. Sloppy driving (swerving, crossing lane lines, running lights)
  6. Vehicle type: passenger car/pickup, commercial vehicle (tractor trailer, delivery, etc.), mass transit (bus, large van), motorcycle

Please also note for each observation: weather, location, speed limit, street quality, surrounding features, time of day, temperature (deg F), and method of observation.
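Here is a minimal sketch of how this coding schema might be laid out in pandas. The column names and category codes below are illustrative choices, not the official class coding scheme.

# Sketch of the vehicle observation schema; names and codes are illustrative.
import pandas as pd

observations = pd.DataFrame(
    [
        [True, False, 1, False, False, "passenger"],
        [False, True, 3, True, True, "commercial"],
    ],
    columns=[
        "device_visible",   # visible use of mobile computer device/phone
        "looking_away",     # driver not looking at road ("The Look")
        "n_passengers",     # number of passengers
        "youth_in_car",     # boolean
        "sloppy_driving",   # swerving, crossing lane lines, running lights
        "vehicle_type",     # passenger, commercial, mass_transit, motorcycle
    ],
)

print(observations.dtypes)
print(observations["vehicle_type"].value_counts())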


Week 2: Sat 27 Jan - Fri 2 Feb

Data table design & manipulation | Python crash-course | File conversions

Learning Objectives

  • Create data set tables designed for analytic purposes: clear column headers, data dictionaries, etc.
  • Gather data in the field and enter it according to our data dictionary
  • Implement data cleaning procedures to address observational inconsistencies and general "messiness"
  • Create a data manipulation and work process log for our distracted driving data
  • Export data as a CSV, verify the export, and read that data into python
  • Diagram essential python data structures: dictionaries and data frames (a minimal example follows this list)
  • Conduct basic data set manipulations in Python
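For the data-structures objective, here is a minimal illustration of a dictionary versus a DataFrame; the names and grades are made-up examples.

# A dictionary holds one record as key/value pairs.
# A pandas DataFrame holds many records as a table with named columns.
import pandas as pd

grades = {"name": "Jordan", "quiz_1": 88, "quiz_2": 94}
print(grades["quiz_1"])

gradebook = pd.DataFrame([
    {"name": "Jordan", "quiz_1": 88, "quiz_2": 94},
    {"name": "Riley", "quiz_1": 75, "quiz_2": 81},
])
print(gradebook["quiz_1"].mean())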

Mid-week to-dos

  1. Prepare to work on these steps by Tuesday evening and give Eric feedback on sticking points or areas for additional learning by midnight on Tuesday. Either call and leave a voicemail at 412.894.3020 or--if you really despise talking--send him an email.
  2. Research and find/purchase/borrow/rent a statistics text that you can study happily and whose exercises you can work. I have used Statistics for Business and Economics by McClave and am working on studying the classic Introductory Probability and Statistical Applications by Meyer.
  3. Install and test Python 3 on your computer.
  4. Install and test the Anaconda 3 library of data-related python tools.
  5. Try running Jupyter Notebook, a python tool for interactive programming and saving python scripts. It comes already installed with Anaconda 3--no extra installs needed, just figure out how to run the server and access it in a web browser.
  6. Download and install LibreOffice if you anticipate ever not having full access to MS Office products. Once installed, create a spreadsheet, export it as a CSV, and import it into, say, Google Sheets to test your work.
  7. Devote 3-4 hours to tinkering with CSV imports using the python tutorials linked here and found online. Place a priority on reading about any fundamentals you may not have in your tool set. Use this as a time to assess your programming acumen and your interest in going more in depth into Python.
  8. Work through a basic pandas tutorial, focusing on reading data in and printing out the object information (see the sketch after this list)
  9. Explore one other package in the python standard library by importing it, creating an object, and calling a few methods on that object.
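As a starting point for to-dos 7 and 8, here is a minimal sketch of reading a CSV with the standard library and with pandas; the file name "distracted_driving.csv" is a placeholder for whatever CSV you export.

# Minimal sketch: read a CSV two ways. The file name is a placeholder.
import csv
import pandas as pd

# Standard-library csv module: each row comes back as a list of strings.
with open("distracted_driving.csv", newline="") as f:
    reader = csv.reader(f)
    header = next(reader)
    print(header)

# pandas: the whole file becomes a DataFrame with named, typed columns.
df = pd.read_csv("distracted_driving.csv")
df.info()         # column names, dtypes, and non-null counts
print(df.head())  # first few rows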

Learning Exercises:

  1. Review our sampling of car data on the computer
  2. Discuss sampling methods and how they underpin the value of our data sets
  3. Conduct a group observational experiment of car data
  4. Format and process this data as a CSV and load it into Python

Learning Resources

  1. Distracted Driving Data Shared Drive
  2. Python File resources on this site
  3. Python Documentation
  4. Reading on Sampling from the University of Texas at Dallas
  5. Use a non-archiving search engine like DuckDuckGo to find resources related to the weekly to-dos.


Week 3: Sat 3 Feb - Fri 9 Feb

Back to basics with Pivot tables and basic python

Learning Objectives

Learning Exercises:

  1. Locate a dataset of interest and brainstorm relevant questions that can be answered with a pivot table
  2. Explore pivot table basics in MS Excel or in LibreOffice Calc
  3. Answer the inquiry questions using that pivot table and make a parallel pivot chart
  4. Write up your mini-analysis in this shared google doc. Use the Table of Contents on page 1 to jump down to your dedicated page. NOTE: you must be logged in to ANY google account to paste in screenshot images of your pivot tables!
  5. Using your own dataset, use python to do the same CSV processing (see the pandas sketch after this list)
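For step 5, here is a minimal pandas sketch of the kind of summary a COUNT-style pivot table gives you. The file and column names are placeholders for your own dataset.

# Pivot-table-style summary in pandas; names below are placeholders.
import pandas as pd

df = pd.read_csv("my_dataset.csv")

# A crosstab counts rows for each combination of two categorical columns,
# much like a pivot table with COUNT as the aggregate.
summary = pd.crosstab(df["category_a"], df["category_b"])
print(summary)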

Learning Resources

  1. LibreOffice Calc version of our Allegheny County Jail census data and its pivot table
  2. WPRDC data set
  3. MS Excel Pivot Tables tutorial
  4. LibreOffice pivot tables tutorial
  5. Function/formula overview for MS Excel

Mid-week Exercises: Spreadsheet Madness

Based on last week's class, we have growth needs in spreadsheet fundamentals. These exercises will ask you to process some data in a spreadsheet using a series of functions. You'll be given links to various core skills in spreadsheets. We'll review them in class on Saturday.

Note that these steps provide links to resources on using functions in LibreOffice Calc, since it is a free and open-source program anybody can download. You can find similar documentation for MS Excel with an internet search. Many of the functions have the same name across programs.

  1. Download a CSV version of the non-traffic citations issued in Allegheny County from the WPRDC.
  2. Open this file in a spreadsheet (either LibreOffice Calc or MS Excel) and save it as a native spreadsheet file (i.e. either a .ods or .xlsx file). This allows you to use the full features of the spreadsheet and preserve any pivot tables you create.
  3. Let's make a backup of our data so we can always go back and restart with little hassle. Do this by duplicating the tab that contains the raw data. Label the two tabs with logical names such that one is our master original data copy and the other is the processed data.
  4. Take some time to browse the data, using the data dictionary as your guide. Ask yourself: what kind of questions can I ask about this data? Is it clean? Do I see fields that I can easily crunch, or do they need to be reformatted? Are there data fields that I might need to adjust to be easily processable?
  5. This step sequence will help you develop spreadsheet skills necessary to answer these brainstormed questions, which we'll then discuss in person on Saturday:
    • Which neighborhood has the highest incident frequency?
    • Of the neighborhood with the highest incident frequency, which offense type was the most frequent? What conclusions can we draw about severity of offense and frequency of offense?
    • Are black folks more likely to be cited than white folks?
    • Are younger folks more likely to be cited for a certain crime than older people? Which ones?
  6. We need to do some trimming of this data, particularly the date column and our ugly offenses field. Start by generating a new column next to the existing "CITEDTIME" column. Use the LEFT() function to extract only the year, month, and day, discarding the time.
  7. Now let's wrestle with the "OFFENSES" column. First, the data are ugly--some entries are all caps, some are not. The section and subsections are not a consistent length, so we can't just extract those with a nice LEFT() or RIGHT() function call. Let's start by getting them all into lowercase for consistency (this way it doesn't look like some offenses are more important than others just because they are in all caps). Scan the list of text functions in LibreOffice for an appropriate function to convert all that text to lowercase. Remember, you'll want to create new columns for each adjustment to the original data. We can always hide the original data.
  8. Before we process the offense data, let's make sure we don't have any trailing spaces at the end of our offenses. Find the name of the function that will automatically remove non-printing characters from the beginning and end of data in a particular cell. You get to do the searching for this one!
  9. This is a tricky step and it will take time and PATIENCE. It may take you up to an hour to get this working right--but it is a rite of passage in spreadsheets since this formula is so powerful. We would have liked to be given an offense ID of consistent length that we could use for sorting and displaying. But we weren't given that data--only the ugly legal code section, subsection, etc. What's worse, those are not in a consistent format. Some have () in them, or more than one pair of (). Some have decimal places, others don't. We want to generate our own set of offense IDs and attach them to the existing offense column. We're going to do this with a special function called VLOOKUP(). The idea behind this function is that we can create a key table that associates unique offense names with our own offense IDs that are pretty, like 1000 and 1001. VLOOKUP will search our data column called OFFENSES and see if there is a matching value in our little lookup table. If a match is found in our lookup table, our own offense ID is returned as the output of the VLOOKUP() function call. This is handy, but tricky to get to work correctly. (A rough pandas analogue of this lookup is sketched after this list.)
  10. Start by reading the documentation on VLOOKUP in the LibreOffice help. It is not a very helpful guide to this function, but we always start with the official documentation and go from there. This page looks ugly, but it is one of the most complete discussions of using VLOOKUP in LibreOffice. You can also find video tutorials out there, but I find videos tedious and not very useful as references, which is what I generally want.
  11. Go ahead and try to generate a new column in our data table called "offenseID" and populate it with the offense IDs that you created in a reference table that VLOOKUP uses. (The analog is matching up the letter A with a value of 90 to 100 in a final grade percent table--we are looking up offense IDs instead of letter grades). When creating your lookup table, you'll want to list all of the unique values found in the original data in one column, and give them a pretty ID number of your choosing. So your first row in this lookup table will be something like "5511(c)(1) CRUELTY TO ANIMALS" in column 1 and a number like 1000 in column 2.
  12. When building your lookup table, you need to determine all the unique values in the OFFENSES column. Here is a discussion of some ways to do this in LibreOffice Calc.
  13. Whew! With your new offense IDs generated, you're ready to start analyzing the data with filters, pivot tables, and charts.
  14. Filter all the columns such that you can sort by unique values, and exclude values that you don't want to work with. Here's the Ahuka tutorial.
  15. Use your pivot table skills to mush the data around to look for answers to the above inquiry questions or ones you generate on your own. Prepare to discuss these in person on Saturday.
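If you'd like to sanity-check your spreadsheet work, here is a rough pandas analogue of steps 6-12. The file name and the single lookup entry are placeholders (the offense string comes from the example in step 11), and the date slice assumes an ISO-like date at the start of CITEDTIME.

# Rough pandas analogue of the spreadsheet steps above (LEFT, LOWER, TRIM, VLOOKUP).
import pandas as pd

df = pd.read_csv("citations.csv")  # placeholder name for the WPRDC export

# Step 6: LEFT() -- keep just the date portion of CITEDTIME (assumes it leads the string).
df["CITEDDATE"] = df["CITEDTIME"].astype(str).str[:10]

# Steps 7-8: LOWER() and a trim -- lowercase and strip the offenses text.
df["OFFENSES_CLEAN"] = df["OFFENSES"].astype(str).str.lower().str.strip()

# Steps 9-12: VLOOKUP() -- map each unique offense string to our own pretty ID.
offense_ids = {
    "5511(c)(1) cruelty to animals": 1000,  # extend with every unique offense
}
df["offenseID"] = df["OFFENSES_CLEAN"].map(offense_ids)

# Offenses still missing from the lookup table show up as NaN here.
print(df.loc[df["offenseID"].isna(), "OFFENSES_CLEAN"].unique())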

Week 4: Sat 10 Feb - Fri 16 Feb

Return of the spreadsheet | OpenRefine Magic

We must never underestimate the importance and value of spreadsheets as the foundation for data analysis. This lesson will review essential spreadsheet operations. We'll also introduce a tool designed for more powerful data cleaning and replacement (but weaker on analysis tools) called OpenRefine. The mid-week exercise will involve producing a small data analysis project through its lifecycle.

Learning Objectives

Learning Resources

  1. OpenRefine's GitHub Account with install instructions
  2. OpenRefine's Expression Language Reference
  3. The master data repo: Data is Plural Blog Archive and repo links (Make a copy into your own Google Drive account)
  4. Eric's data processing example: Nuclear Blast record

Learning Exercises:

  1. Review the following data journalism piece by the local outfit PublicSource: Let's Talk About race and statistics every Pittsburgher should know. Consider the following questions:
    • What makes the graphs presented effective? How do you feel about the "drawing" effect used in the graphics?
    • How did Public Source use honest journalism principles in their article?
    • This article didn't present any analysis or political opinions explicitly--but are there implicit messages given in this data? How do you feel about this approach (sharing politicized data without analysis)?
    • What is the source of this data? Does knowing the source add or detract from the value of this article?
  2. Let's stretch our spreadsheet muscles with school-related civil rights (discipline) data located on our server here. Download this spreadsheet and follow these analysis guidelines.
  3. Review the data guides for this CRDC data published by the Office for Civil Rights.
  4. In your spreadsheet, copy the initial raw data table. Rename the initial tab something like "rawData". In your copied tab, delete unwanted columns to isolate the variables of interest.
  5. Shorten column names, remove spaces, etc.
  6. Cut out the rows to isolate the level of analysis you are targeting: county, district, or school.
  7. Calculate a per enrolled student metric (field / total enrollment) for each of your columns of interest.
  8. Generate inquiry questions related to the connection between school size or school type and your field of interest.
  9. Use min/max/stdev functions to compute summary statistics for each of your fields of interest (5-6 of them). A rough pandas sketch of steps 7 and 9 follows this list.
  10. Create a pivot chart to help you answer your inquiry questions. Once you have isolated the data you want, generate a few charts from the pivot table that shed light on your conclusions.
  11. Publish your results in our shared google doc here
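Here is a rough pandas sketch of steps 7 and 9, assuming placeholder file and column names for whichever CRDC fields you kept in your cleaned tab.

# Per-enrolled-student metric plus min/max/std summaries; names are placeholders.
import pandas as pd

df = pd.read_csv("crdc_discipline_clean.csv")

# Step 7: per-student rate = field of interest / total enrollment.
df["suspension_rate"] = df["suspensions"] / df["total_enrollment"]

# Step 9: summary statistics for each field of interest.
print(df[["suspensions", "total_enrollment", "suspension_rate"]]
      .agg(["min", "max", "mean", "std"]))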

Open Refine Practice

  1. Acquire our CSV of nuclear explosion data from our server
  2. Load this data into OpenRefine
  3. Use the Facet functions to clean up the blast size field so it's all potentially numeric
  4. Use the toNumber() function to convert this field to numeric values
  5. Categorize the blasts based on their size into three classes: small, medium, and large (a pandas analogue of steps 4-6 is sketched after this list)
  6. Export data to CSV and open in a spreadsheet
  7. Develop inquiry questions based on this data
  8. Follow standard analysis procedures in a spreadsheet to uncover the answers. Prepare to share.
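If you want to double-check the OpenRefine steps in python, here is a rough pandas analogue of steps 4-6. The file name, column name, and size thresholds are assumptions, not the class values.

# Coerce the blast-size field to numbers, bin into classes, export to CSV.
import pandas as pd

df = pd.read_csv("nuclear_blasts.csv")  # placeholder name for the server CSV

# Step 4: convert to numeric; unparseable values become NaN for review.
df["blast_size"] = pd.to_numeric(df["blast_size"], errors="coerce")

# Step 5: bin the blasts into three classes (thresholds are illustrative).
df["blast_class"] = pd.cut(
    df["blast_size"],
    bins=[0, 20, 150, float("inf")],
    labels=["small", "medium", "large"],
)
print(df["blast_class"].value_counts())

# Step 6: export back to CSV for spreadsheet work.
df.to_csv("nuclear_blasts_classified.csv", index=False)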

Mid-week to-dos

NEW Products to Produce Mid-Week


Week 5: Sat 17 Feb - Fri 23 Feb

Visualization tools | US Census and ACS data processing

Learning Objectives

Learning Resources

  1. US Census data is all accessible through the American FactFinder portal located here.
  2. Directory listing of class files: including sample database and chart spreadsheet

Learning Exercises:

  1. Let's begin with a discussion of a fascinating use of census data to visualize distributions of people by race in the USA. Investigate this "Racial Dot Map" tool created by researchers at the University of Virginia. Consider these discussion questions:
    • Where did you explore first? What were your first impressions of this data?
    • What makes this an effective data visualization tool? What are its limitations?
    • Explore the researcher's data portal by clicking the "what am I looking at?" link. What principles of good data analysis are exhibited?
    • What additional layers of data would you like to add to this racial dot map? What conclusions or ideas would adding this data allow viewers to consider or conclude?
  2. Browse the American Community Survey data to find two geographies of interest over two time periods in which you can investigate change in some set of variables. You'll need to make sure that you have data on the same geography level for the tables you choose in both years.
  3. Download both tables and import them into a spreadsheet
  4. Clean the columns by developing sensible column names and deleting the columns you don't want (probably the margins of error for this practice activity). Remember--no strange columns and no spaces in variable names!
  5. Save this file and import into OpenRefine to clean up fields. Delete records in which there is very little data. Replace no-value markers with 0 so we can use numeric functions on the fields
  6. Export the data from Open Refine back into a spreadsheet
  7. Copy the cleaned data into LibreOffice Base and create a master table with joined data on a key column for export (a pandas sketch of this kind of key-column join follows this list)
  8. Extract data from the database back into the spreadsheet for visualization
  9. Visualize the data you've gathered and do the write-up in the shared google doc located here.
  10. Prepare to give a short presentation on this data at the start of the next class.
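Here is a rough pandas sketch of steps 5 and 7: filling no-value markers and joining the two ACS tables on a shared key column. File names and the "GEOID" key are placeholders for whatever geography identifier your tables share.

# Fill no-value markers, join two ACS tables on a key column, export to CSV.
import pandas as pd

acs_year1 = pd.read_csv("acs_table_year1.csv", dtype={"GEOID": str})
acs_year2 = pd.read_csv("acs_table_year2.csv", dtype={"GEOID": str})

# Step 5: replace no-value markers with 0 so numeric functions work.
acs_year1 = acs_year1.fillna(0)
acs_year2 = acs_year2.fillna(0)

# Step 7: join the two years into one master table on the key column.
master = acs_year1.merge(acs_year2, on="GEOID", suffixes=("_y1", "_y2"))

# Step 8: export for visualization in a spreadsheet.
master.to_csv("acs_master.csv", index=False)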

Mid-week to-dos

Products to Produce


Week 6: Sat 24 Feb - Fri 2 March

Mapping Fundamentals | Spatial data analysis | QGIS Glory | CartoDB magic

Learning Objectives

External Resources

  1. Cartography on Wikipedia: Essential concepts and interesting history on the field of mapping and geospatial analysis
  2. QGIS main page: The free and open source competitor to ESRI's $1000+ ArcMap software. Its documentation is extensive and amazing.
  3. US Census data is all accessible through the American FactFinder portal located here.
  4. The actual map polygons are all published through the TIGER shapefiles program
  5. Carto (formerly CartoDB) is a free tool for creating and publishing a variety of map types online. Great for sharing and visualizing and embedding but lacking the spatial join tools available in QGIS.

Learning Exercises:

  1. Explore the essential elements of map creation with the physical mapping projection activity (enter old-school overhead transparency films!)
  2. Check out examples of cool online maps: Eric's CRDC black/white suspension ratios dot map, CMU Students for Urban Data Systems' police jurisdiction maps, and Mark Egge's bus bunching analysis and map. What principles of honest data journalism are employed? How do qualitative and quantitative data blend or not blend in these examples?
  3. Navigating mapping with Carto online: Visualizing and tinkering with Allegheny County Property Assessment Data accessed through the Western Pennsylvania Regional Data Center's Property Information Extractor. We'll cook up a data set to export and dump it into Carto to tinker with map basics: choropleth map analysis, layers, annotations, presentation.
  4. Explore QGIS as a fully featured mapping tool with support for layer building, data joining, advanced geospatial analysis, and exporting/sharing.
  5. Build a map of the US Census data we analyzed last week using QGIS and export it up to Carto.

Map analysis in QGIS assembly sequence

  1. Generate compelling inquiry questions that you hope to answer (at least partially) through your analysis.
  2. Create a blank map in QGIS
  3. Assemble the "flat file" of US Census data that you'd like to map. Make sure to preserve the geographic entity id (usually called GEOID) since that's what we'll use to join the flat file data to the geospatial features.
  4. Download the appropriate TIGER shapefile (resource link above) and import into QGIS. Make sure the data seem reasonable--are these indeed the geospatial features you'd like? Ensure a correspondence between the year of your CENSUS data and the year the Shapefile data was published (the US Census adjusts the shapes of the geospatial features every 10 years). Do a sample lookup of a GEOID in both the flat file table and the map features to make sure the join is likely to find matches for some or all of the features you'd like to visualize data for.
  5. Join the flat file data to the map features using the data join process described in this tutorial (a scripted geopandas analogue is sketched after this list)
  6. Be sure to open your data table for the geospatial layer and make sure the join worked.
  7. In the symbology tab of the polygon layer, choose appropriate data fields to visualize and tinker with the bin sizes and coloring to make sure you're drawing out the essential data that speak to your inquiry questions.
  8. Export your map as an image and load into our weekly project assembly document.
  9. Thoughtfully explain your data analysis process, your inquiry questions, and the conclusions your data can (and cannot) support in your section.
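For those who prefer to prototype the join in code rather than the QGIS dialogs, here is a rough geopandas sketch of steps 3-6. The file names are placeholders, geopandas is not part of the required class workflow, and the GEOID column follows step 3.

# Scripted analogue of the flat-file-to-shapefile join; file names are placeholders.
import geopandas as gpd
import pandas as pd

# Step 3: the flat file of census data, keyed by GEOID.
flat = pd.read_csv("census_flat_file.csv", dtype={"GEOID": str})

# Step 4: the TIGER shapefile with the geographic features.
shapes = gpd.read_file("tiger_shapefile.shp")

# Step 5: join the flat file onto the map features by GEOID.
joined = shapes.merge(flat, on="GEOID", how="left")

# Step 6: confirm the join worked -- count features with no matching flat-file row.
missing = joined[flat.columns.drop("GEOID")].isna().all(axis=1).sum()
print(len(joined), "features,", missing, "with no matching flat-file row")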

Mid-week to-dos

Products to Produce


Week 7: Sat 3 March - Fri 9 March

Exposure to Python and Using python scripts | Taste of regression | Project Design

Final Project Share Directory

Please compile all of your project-related files into a single location and upload them to your specific sub-directory in our shared drive linked below.

Required Components

  1. Original/raw data sets labeled as such, in their OWN directory
  2. Processed data files labeled appropriately
  3. Project outline as a PDF: This should be a document with nice headers and carefully proofed text that includes: background on your project, inquiry questions, analysis steps, conclusions, and ideas for further exploration. Include snaps of any visualizations you use.
  4. Any graphs or charts or images used--properly titled.
  5. A work process log that is a step-by-step list of what you did to analyze your data: tools used, files moved, functions/libraries, etc.
  6. The source code of any scripts (such as those generated by OpenRefine) used in the project
  7. A file called readme.txt that lists the files in this directory and why they are included.

Week 8 [LAST SESSION]: Sat 10 March

Share final analysis projects with client | Celebrate the promise of data analytics

Learning Objectives

Special Guest Conversant

Our class had the pleasure of hosting professional data analyst Mark Egge, a consultant at High Street Consulting based in Pittsburgh, PA. Mr. Egge lives in Bozeman, MT.

Products to Produce