
What is this place? Des Plaines “park” edition

Screenshot shows the “park” in Des Plaines, Illinois.

I was methodically reviewing features on OpenStreetMap that could benefit from additional attributes, namely missing names and cities. The information on OpenStreetMap feeds into Chicago Cityscape so that the real estate information service I created can show people looking up addresses the locations of nearby amenities, including parks.

This particularly large park in Des Plaines had no name and no city, so I started investigating. From the overhead imagery view, it looked to me like a landfill.

Since there was a forest preserve to the east, I looked in the Forest Preserve District of Cook County’s nice interactive map for any properties near Beck Lake. Nothing found.

Overhead imagery of the so-called “park”. Source: Mapbox

I looked on Google Maps Street View for some insight, and there was no name there either. (Note that data from Google Maps cannot be used to amend OpenStreetMap because Google's data license isn't compatible with OSM's openness requirements.)

Next I looked at the Cook County parcel map that Chicago Cityscape has to identify its “Property Index Number” (PIN). All of it fits within 04-31-300-003-0000 – the database pointed to “Catholic Cemeteries” as the property owner.

I looked up that PIN on the Cook County Recorder of Deeds to try and find more information. Lo and behold, the most recent document that was recorded against the PIN was for a “Memorandum of Option Agreement” that identified the Catholic Bishop of Chicago as the landlord and Patriot Acres, LLC, as the tenant.

Patriot Acres has a website that describes a new commercial composting facility being built at this location.

I re-tagged the feature on OSM with “amenity=waste_disposal” and “waste=organic” to properly describe it as a composting facility, and not a park. Case closed.

Chicago bonfire rules

I believe that more people are hosting more backyard bonfires in Chicago in order to spend time with friends outdoors, because it’s one of the lowest risk ways to still enjoy life during the COVID-19 pandemic that Trump is prolonging.

The City of Chicago has rules on what can be burned. Be sensitive to neighbors who rely on keeping their windows open to cool and ventilate their homes.

The summary version is this: You can burn manufactured firewood that you purchased, and you can burn logs from trees that someone cut by hand. All logs have to be untreated and two feet long or shorter.

You can’t burn scrap wood, lumber, plywood, or particle board. You also cannot burn garbage. Avoid having bonfires when the wind speed is 15 miles per hour or faster.

Read all the rules in this PDF. The rules aren’t in the municipal code; the code in section 11-4-740 says that the environment commissioner shall make rules (Chicago doesn’t currently have a department of environment).

This is my process to improve the performance of my PostgreSQL database and queries

Every now and then I’ll run the queries below to help me figure out ways to optimize the queries my application is using, and to figure out if certain tables and views (relations) need to be optimized.

Updated January 18, 2020, to add a query to alter the autovacuum and autoanalyze factors on a per-table basis, because I have a “cache” table that is updated quite frequently and generates many dead tuples every minute. The table would balloon from 250 MB to 15 GB in a few hours. I hope that reducing these factors to just a threshold of 100 dead tuples will prevent the inflation. I also need to monitor it to make sure it doesn’t make my RDS database work too hard.

/* Show the last time at which every table and view was vacuumed and analyzed */
SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze  
FROM pg_stat_all_tables
WHERE schemaname = 'public';  

/* See which queries are most popular */
SELECT * FROM pg_stat_statements WHERE calls >= 1000; /* replace 1000 with any integer, depending on how actively your database is queried */
select pg_stat_statements_reset();
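
Note that pg_stat_statements is an extension and the queries above return nothing until it's enabled; a minimal setup sketch (the module also has to be listed in shared_preload_libraries in postgresql.conf, which requires a restart, or a parameter group change on RDS):

/* Enable the pg_stat_statements extension (run once per database) */
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;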

/* Kill a specific query */
select pg_terminate_backend(INTEGER); /* replace INTEGER with the process ID of a specific query */

/* Get the size of the database */
select pg_size_pretty(pg_database_size('DATABASE_NAME')); /* replace DATABASE_NAME with the name of your database */
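
A related query I find handy is sizing individual relations rather than the whole database; this sketch lists the ten largest tables, with their indexes and TOAST data included:

/* Get the ten largest tables, indexes and TOAST included */
SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;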

/* Kill queries that are taking too long; replace DATABASE_NAME with the name of your database */
SELECT query, pg_terminate_backend(pid) 
    FROM pg_stat_activity 
    WHERE datname = 'DATABASE_NAME' 
      AND pid <> pg_backend_pid() 
      AND (state = 'active' 
      OR state = 'idle') 
      AND state_change < current_timestamp - INTERVAL '15' MINUTE; /* change 15 to any integer that you think means a query is taking too long */

/* See which queries are running */
select * from pg_stat_activity;
SELECT * FROM pg_stat_activity WHERE wait_event IS NOT NULL AND backend_type = 'client backend';

/* Find very large indexes and indexes that aren't used much */
WITH table_scans as ( 
    SELECT relid, 
        tables.idx_scan + tables.seq_scan as all_scans, 
        ( tables.n_tup_ins + tables.n_tup_upd + tables.n_tup_del ) as writes, 
                pg_relation_size(relid) as table_size 
        FROM pg_stat_user_tables as tables 
),
all_writes as ( 
    SELECT sum(writes) as total_writes 
    FROM table_scans 
),
indexes as ( 
    SELECT idx_stat.relid, idx_stat.indexrelid, 
        idx_stat.schemaname, idx_stat.relname as tablename, 
        idx_stat.indexrelname as indexname, 
        idx_stat.idx_scan,
        pg_relation_size(idx_stat.indexrelid) as index_bytes, 
        indexdef ~* 'USING btree' AS idx_is_btree 
    FROM pg_stat_user_indexes as idx_stat 
        JOIN pg_index 
            USING (indexrelid) 
        JOIN pg_indexes as indexes 
            ON idx_stat.schemaname = indexes.schemaname 
                AND idx_stat.relname = indexes.tablename 
                AND idx_stat.indexrelname = indexes.indexname 
    WHERE pg_index.indisunique = FALSE 
),
index_ratios AS ( 
SELECT schemaname, tablename, indexname, 
    idx_scan, all_scans,
    round(( CASE WHEN all_scans = 0 THEN 0.0::NUMERIC 
        ELSE idx_scan::NUMERIC/all_scans * 100 END),2) as index_scan_pct, 
    writes,
    round((CASE WHEN writes = 0 THEN idx_scan::NUMERIC ELSE idx_scan::NUMERIC/writes END),2) 
        as scans_per_write, 
    pg_size_pretty(index_bytes) as index_size, 
    pg_size_pretty(table_size) as table_size, 
    idx_is_btree, index_bytes
    FROM indexes 
    JOIN table_scans 
    USING (relid) 
),
index_groups AS ( 
SELECT 'Never Used Indexes' as reason, *, 1 as grp 
FROM index_ratios 
WHERE
    idx_scan = 0
    and idx_is_btree 
UNION ALL 
SELECT 'Low Scans, High Writes' as reason, *, 2 as grp 
FROM index_ratios 
WHERE
    scans_per_write <= 1
    and index_scan_pct < 10 
    and idx_scan > 0 
    and writes > 100 
    and idx_is_btree 
UNION ALL 
SELECT 'Seldom Used Large Indexes' as reason, *, 3 as grp 
FROM index_ratios 
WHERE
    index_scan_pct < 5
    and scans_per_write > 1 
    and idx_scan > 0 
    and idx_is_btree 
    and index_bytes > 100000000 
UNION ALL 
SELECT 'High-Write Large Non-Btree' as reason, index_ratios.*, 4 as grp 
FROM index_ratios, all_writes 
WHERE
    ( writes::NUMERIC / ( total_writes + 1 ) ) > 0.02 
    AND NOT idx_is_btree 
    AND index_bytes > 100000000 
ORDER BY grp, index_bytes DESC ) 
SELECT reason, schemaname, tablename, indexname, 
    index_scan_pct, scans_per_write, index_size, table_size
FROM index_groups; 

/* Modify the autovacuum and autoanalyze factors for a single table 
https://klotzandrew.com/blog/posgres-per-table-autovacuum-management */
ALTER TABLE cache SET (autovacuum_analyze_scale_factor = 0, autovacuum_analyze_threshold = 100);
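
Since my problem is dead tuples rather than stale statistics, the same approach should apply to the vacuum side; a sketch for my “cache” table, mirroring the analyze settings above:

/* Trigger autovacuum after 100 dead tuples, regardless of table size */
ALTER TABLE cache SET (autovacuum_vacuum_scale_factor = 0, autovacuum_vacuum_threshold = 100);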

WITH raw_data AS (
  SELECT
    pg_namespace.nspname,
    pg_class.relname,
    pg_class.oid AS relid,
    pg_class.reltuples,
    pg_stat_all_tables.n_dead_tup,
    pg_stat_all_tables.n_mod_since_analyze,
    (SELECT split_part(x, '=', 2) FROM unnest(pg_class.reloptions) q (x) WHERE x ~ '^autovacuum_analyze_scale_factor=' ) as c_analyze_factor,
    (SELECT split_part(x, '=', 2) FROM unnest(pg_class.reloptions) q (x) WHERE x ~ '^autovacuum_analyze_threshold=' ) as c_analyze_threshold,
    (SELECT split_part(x, '=', 2) FROM unnest(pg_class.reloptions) q (x) WHERE x ~ '^autovacuum_vacuum_scale_factor=' ) as c_vacuum_factor,
    (SELECT split_part(x, '=', 2) FROM unnest(pg_class.reloptions) q (x) WHERE x ~ '^autovacuum_vacuum_threshold=' ) as c_vacuum_threshold,
    to_char(pg_stat_all_tables.last_vacuum, 'YYYY-MM-DD HH24:MI:SS') as last_vacuum,
    to_char(pg_stat_all_tables.last_autovacuum, 'YYYY-MM-DD HH24:MI:SS') as last_autovacuum
  FROM
    pg_class
  JOIN pg_namespace ON pg_class.relnamespace = pg_namespace.oid
    LEFT OUTER JOIN pg_stat_all_tables ON pg_class.oid = pg_stat_all_tables.relid
  WHERE
    n_dead_tup IS NOT NULL
    AND nspname NOT IN ('information_schema', 'pg_catalog')
    AND nspname NOT LIKE 'pg_toast%'
    AND pg_class.relkind = 'r'
), data AS (
  SELECT
    *,
    COALESCE(raw_data.c_analyze_factor, current_setting('autovacuum_analyze_scale_factor'))::float8 AS analyze_factor,
    COALESCE(raw_data.c_analyze_threshold, current_setting('autovacuum_analyze_threshold'))::float8 AS analyze_threshold,
    COALESCE(raw_data.c_vacuum_factor, current_setting('autovacuum_vacuum_scale_factor'))::float8 AS vacuum_factor,
    COALESCE(raw_data.c_vacuum_threshold, current_setting('autovacuum_vacuum_threshold'))::float8 AS vacuum_threshold
  FROM raw_data
)
SELECT
  relid,
  nspname,
  relname,
  reltuples,
  n_dead_tup,
  ROUND(reltuples * vacuum_factor + vacuum_threshold) AS v_threshold,
  n_mod_since_analyze,
  ROUND(reltuples * analyze_factor + analyze_threshold) AS a_threshold,
  c_analyze_factor as caf,
  c_analyze_threshold as cat,
  c_vacuum_factor as cvf,
  c_vacuum_threshold as cvt,
  analyze_factor as af,
  analyze_threshold as at,
  vacuum_factor as vf,
  vacuum_threshold as vt,
  last_vacuum,
  last_autovacuum
FROM
  data
ORDER BY n_dead_tup DESC;
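
For a table that has already ballooned, a plain VACUUM marks the dead tuples as reusable but doesn't shrink the file on disk; VACUUM FULL rewrites the table and returns the space to the operating system, at the cost of an exclusive lock for the duration. A sketch, using my “cache” table as the example:

/* Rewrite the table to reclaim disk space; locks the table while it runs */
VACUUM FULL VERBOSE cache;
/* Refresh the planner statistics afterward */
ANALYZE cache;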

My investment in crowdfunded projects hasn’t gone well

A second project that I invested in through the crowdfunding portal, Small Change, has “failed”. It’s a project for a worker-owned bed and breakfast. According to the project owner, the project needed more funds than what they were able to obtain from Small Change investors and other investors. The owner found a new investor, who required that all Small Change investors be removed from the project. They announced that to us earlier this year, and said we would receive our initial investment + 3%.

The Small Change leadership didn’t think this plan was good enough, starting with their disagreement with the removal of Small Change investors.

Eventually, the project owner returned our initial investment + 8%. I don’t know if the project owner had the right to cancel and return our investments, but I’m happy to have my money back + 8%.

I’m disappointed in the ultimate outcome of not being able to participate in 1476 Magazine because this is the second project I invested in via Small Change that has not succeeded. The first was for a group of three single-family houses built to a high energy efficiency standard that have not yet sold.

I have an investment in a third project (in Philadelphia) that is converting a disused industrial building into a mixed-use development with residential, and I hope that one succeeds, but I hadn’t received an update since it was funded. Update: I inquired and the project owner sent me the update they had sent in the last couple of months, which I seem to have missed.

Golaski Labs in Philadelphia. A Small Change poster advertises the project. The photo was taken in 2018, and significant progress has been made since then.


I believe in the Small Change mission, and crowdfunding broadly, but after making my third investment I paused so that I could wait and see what happens with all of the projects.

Chicago’s Best Landlord, Inc.

3901 W Jackson Blvd / Chicago’s Best Landlord, Inc.
A 12-flat in West Garfield Park, Chicago.

I’m halfway done reading the book, “Family Properties: How the Struggle Over Race and Real Estate Transformed Chicago and Urban America“, by Beryl Satter, and I’ve started keeping a list of all the places she mentions.

The book is about her father, and the history of racist housing policies and displacement in Chicago’s West Side. Mark Satter, who died of heart problems when he was 49, owned several properties in Lawndale and Garfield Park.

One of them was a 12-flat in West Garfield Park at 3901 W Jackson Blvd, one block west of the eponymous park. Mark died in 1965, and the properties reverted to a trust he had set up, controlled by his wife, Clarice. Mired in debt to contractors and the mortgage lender, Clarice had a family friend, Irving Block, sell the four properties.

3901 W Jackson Blvd eventually became the property of L.J. Epstein, a known slumlord, in 1967. (It’s unclear in the book, and online deed records don’t go that far back, if it was owned by someone else between Clarice selling it and L.J. acquiring it.)

Less than two years later, though, because of L.J.’s neglect of the building, Judge Kral placed it in receivership because of the activism of Ruby Kirk, a Black tenant from Tennessee.

The judge made Ruby the receiver, because the judge perceived that she cared about the building more than the owner. Ruby created a non-profit corporation on Monday, September 22, 1969, called “Chicago’s Best Landlord, Inc.” (It was involuntarily dissolved on February 1, 2003.)

Beryl wrote that during the remodel the corporation’s name was engraved into the stone above the front door, along with the initials of the tenants at the time. As I was reading that, I thought the stone had to still be there, and since I was biking over to Garfield Park for some hot air balloon training with a couple of friends, I decided to stop by.

Ruby was also successful in pressing First National Bank of Chicago to issue a loan to her, a Black woman, when banks refused to lend to Black people as the Federal Housing Administration refused to insure the loans. If you’re from around here, you may remember that First National Bank merged with Banc One to become Bank One, which then became part of Chase. Chase is the second largest loan issuer in Chicago, and still has a “hard time” lending to Black people. It was also then, however, that the federal rules changed and FHA would insure mortgages in Black neighborhoods.

3901 W Jackson Blvd / Chicago’s Best Landlord, Inc.
Enjoy this little piece of Chicago history -> book -> real-life-right-now photo. Ruby Kirk’s initials are in the center.

Chicago’s Best Landlord, Inc. owned the 12-flat from 1969 until 2001, when the U.S. Department of Housing and Urban Development (HUD) foreclosed on the property. First National Bank had assigned the mortgage to HUD in 1976.

Peter Gritzanis purchased the property for $270,000, according to the deed recorded on June 28, 2001, at the Cook County Recorder of Deeds office. It looks like Peter stopped owning it in 2012.