Unleashing the Power of Python and PostgreSQL: Optimizing Data Processing with Similarity Matching

In today’s data-driven world, the ability to match and compare strings effectively is an invaluable skill. Python, a powerful and intuitive programming language, provides numerous methods for similarity matching. One such method is the Jaro-Winkler algorithm, which is commonly used in record linkage and data deduplication.

The Jaro-Winkler method is particularly effective for comparing short strings such as names and addresses. It takes into account the position and order of characters, and boosts the score of strings that share a common prefix, making it more accurate than simpler character-by-character comparisons.
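To build intuition for what the algorithm measures, here is a minimal pure-Python sketch of Jaro-Winkler. It is an illustration only, not the optimized implementation used by the libraries mentioned below:

```python
def jaro(s1, s2):
    """Jaro similarity: fraction of matching characters, penalized for transpositions."""
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if len1 == 0 or len2 == 0:
        return 0.0
    # Characters match if equal and within half the longer string's length
    window = max(max(len1, len2) // 2 - 1, 0)
    s1_matched = [False] * len1
    s2_matched = [False] * len2
    matches = 0
    for i, c in enumerate(s1):
        for j in range(max(0, i - window), min(i + window + 1, len2)):
            if not s2_matched[j] and s2[j] == c:
                s1_matched[i] = s2_matched[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    # Count transpositions: matched characters that appear in a different order
    t, k = 0, 0
    for i in range(len1):
        if s1_matched[i]:
            while not s2_matched[k]:
                k += 1
            if s1[i] != s2[k]:
                t += 1
            k += 1
    t //= 2
    return (matches / len1 + matches / len2 + (matches - t) / matches) / 3

def jaro_winkler(s1, s2, p=0.1, max_prefix=4):
    """Winkler refinement: boost the score for a shared prefix (up to 4 chars)."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix >= max_prefix:
            break
        prefix += 1
    return j + prefix * p * (1 - j)

print(round(jaro_winkler("MARTHA", "MARHTA"), 4))  # 0.9611
```

The classic "MARTHA"/"MARHTA" pair scores highly because all characters match and only one transposition occurs, and the shared "MAR" prefix boosts the score further.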

To leverage the Jaro-Winkler method in Python, we can use the `recordlinkage` library, which provides an implementation of this algorithm. Below, we walk through data preparation, function implementation, and result interpretation to show how to perform similarity matching efficiently with Python and the `recordlinkage` library.

import pandas as pd
import recordlinkage

def compare_and_link(df, engine):
    # Fetch the existing records from the database
    query = """
        SELECT normalised_agency_name, "Agent Address", id
        FROM details;
    """
    db_data = pd.read_sql(query, engine)

    # Iterate through each row in the DataFrame
    for index, row in df.iterrows():
        row_df = pd.DataFrame(row).T

        # Create an indexer object: a full index pairs the current row
        # with every record in db_data
        indexer = recordlinkage.Index()
        indexer.full()
        pairs = indexer.index(row_df, db_data)

        # Create a comparison object; threshold=0.85 turns each
        # Jaro-Winkler score into a binary 0/1 result
        compare = recordlinkage.Compare()
        compare.string('normalised_agency_name', 'normalised_agency_name', method='jarowinkler', threshold=0.85)
        compare.string('Agent Address', 'Agent Address', method='jarowinkler', threshold=0.85)

        # Perform the comparison
        features = compare.compute(pairs, row_df, db_data)

        # A pair is a match only if both fields cleared the threshold
        matches = features[features.sum(axis=1) > 1].index
        if len(matches) > 0:
            # Reuse the ID of the first matching record
            df.loc[index, 'id'] = db_data.loc[matches[0][1], 'id']
        else:
            # No match: increment the current maximum ID
            max_id = db_data['id'].max()
            max_id = max_id if pd.notnull(max_id) else 0
            df.loc[index, 'id'] = max_id + 1

    # After processing all rows, append the updated DataFrame to the table
    df.to_sql('details', engine, if_exists='append', index=False)

The `compare_and_link` function above is a comprehensive implementation of the Jaro-Winkler method for data matching in Python using the `recordlinkage` library. The function begins by preparing and executing a query to retrieve data from a specified database, storing the results in a DataFrame named `db_data`.

Subsequently, it iterates through each row in the DataFrame `df`. On each iteration, it creates an indexer object, which pairs the current row with every record in `db_data`, and a comparison object. The comparison object applies the Jaro-Winkler method to two fields, ‘normalised_agency_name’ and ‘Agent Address’, with a threshold of 0.85 used to determine a match.

Then, the comparison is performed, producing a set of features. Because the threshold converts each score to a binary 0 or 1, the function treats a pair as a match when the sum of the features across axis 1 (columns) is greater than 1, i.e. when both fields matched. If a match is found, the `id` from the corresponding `db_data` row is copied to the `df` row. If no match is found, the function increments the maximum `id` from `db_data` and assigns this new `id` to the `df` row.

Finally, after all rows have been processed, the function inserts the updated `df` DataFrame into the database, appending to the existing ‘details’ table. This function is a powerful tool for identifying and linking similar data entries, contributing to the efficiency and accuracy of data management tasks.
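The match criterion deserves a closer look. Since each thresholded comparison yields 0 or 1, requiring the row sum to exceed 1 means both fields must clear the threshold simultaneously. A plain-Python sketch, using hypothetical pair labels in place of the real DataFrame index:

```python
# Hypothetical per-pair binary scores after thresholding at 0.85,
# mirroring the structure of the features DataFrame in compare_and_link
features = [
    {"pair": ("new_row", 101), "name": 1, "address": 1},  # both fields matched
    {"pair": ("new_row", 102), "name": 1, "address": 0},  # name only
    {"pair": ("new_row", 103), "name": 0, "address": 0},  # no match
]

# sum(axis=1) > 1 keeps only pairs where BOTH fields cleared the threshold
matches = [f["pair"] for f in features if f["name"] + f["address"] > 1]
print(matches)  # [('new_row', 101)]
```

Lowering the criterion to `>= 1` would instead link rows that agree on only one of the two fields, a much looser notion of a match.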

However, when it comes to performing these operations in a database, PostgreSQL’s `pg_trgm` extension comes in handy. The `pg_trgm` module provides functions and operators for determining the similarity of alphanumeric text based on trigram matching, as well as index operator classes that support fast searching for similar strings.

Let’s take a look:

CREATE EXTENSION IF NOT EXISTS pg_trgm;

The line `CREATE EXTENSION IF NOT EXISTS pg_trgm;` is an SQL command that activates the `pg_trgm` extension if it is not already enabled in the current database. The `pg_trgm` extension works on the principle of trigram matching: it divides strings into three-character sequences (trigrams) and measures how many of these sequences two strings share, enabling the detection of similarity between different pieces of text. Enabling it makes `similarity()` and related operators available, and its index operator classes allow PostgreSQL to search for similar strings quickly.
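To see what trigram matching computes, here is a small pure-Python approximation of `pg_trgm`’s behaviour: lower-case the text, pad each word with two leading spaces and one trailing space, extract three-character grams, then take the ratio of shared to total distinct trigrams. The real extension differs in details such as its handling of non-alphanumeric characters, so treat this as a sketch:

```python
def trigrams(text):
    # Approximation of pg_trgm: lower-case, then pad each word with
    # two leading spaces and one trailing space before extracting 3-grams
    grams = set()
    for word in text.lower().split():
        padded = "  " + word + " "
        for i in range(len(padded) - 2):
            grams.add(padded[i:i + 3])
    return grams

def similarity(a, b):
    # Share of trigrams in common, relative to all distinct trigrams
    ta, tb = trigrams(a), trigrams(b)
    if not ta and not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

print(similarity("Test corp", "Test corp1"))  # 0.75
```

Note that "Test corp" and "Test corp1" come out well above the 0.7 threshold used later in this post, which is exactly why the trigger links them.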

CREATE OR REPLACE FUNCTION normalize_agency_name(agency_name text)
RETURNS text LANGUAGE plpgsql AS $$
BEGIN
    -- Your normalization logic here
    RETURN lower(trim(agency_name)); -- This is a simplified example
END;
$$;

The code block presented above demonstrates the creation of a PostgreSQL function that normalizes agency names. The function, named `normalize_agency_name`, takes a single argument, `agency_name`, of the text data type. Once invoked, the function applies normalization logic to `agency_name`; this simplified example merely trims surrounding whitespace and lower-cases the input.

The line `CREATE OR REPLACE FUNCTION normalize_agency_name(agency_name text)` initiates the creation or replacement of the function in PostgreSQL. The `RETURNS text` clause signifies that the function will output text data type.

The function body is written in Procedural Language/PostgreSQL, as indicated by the `LANGUAGE plpgsql` clause, and its logic is contained between the `BEGIN` and `END;` statements.

In this case, the function logic is fairly simple: it takes the agency name, trims any leading or trailing whitespace, and converts all characters to lowercase. The result is then returned by the function. This function is pivotal for ensuring consistency and eliminating potential discrepancies in the database caused by varied text input formats. In essence, this function increases data integrity and facilitates more reliable data manipulation and analysis.
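If the same data is also cleaned in Python before loading, it helps to keep the normalization logic identical on both sides so the two pipelines agree on what counts as the same name. A minimal Python mirror of the SQL function (the name is reused here purely for illustration):

```python
def normalize_agency_name(agency_name: str) -> str:
    # Mirrors the SQL function: trim surrounding whitespace, then lower-case.
    # A fuller version might also collapse internal whitespace or strip punctuation.
    return agency_name.strip().lower()

print(normalize_agency_name("  ACME Holdings  "))  # acme holdings
```

Applying the same normalization in both Python and PostgreSQL prevents spurious non-matches caused by one side comparing raw text against the other side’s cleaned text.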

Finally we add this function:

CREATE OR REPLACE FUNCTION compare_and_link()
RETURNS TRIGGER LANGUAGE plpgsql AS $$
DECLARE
    matched_id integer;
    max_id integer;
BEGIN
    -- Look for a matching record
    SELECT id INTO matched_id
    FROM demo
    WHERE
        similarity(normalize_agency_name(NEW.normalised_agency_name), normalize_agency_name(normalised_agency_name)) > 0.7
        AND similarity(NEW."Agent Address", "Agent Address") > 0.7
    LIMIT 1;
    IF FOUND THEN
        -- If a match is found, reuse the matched record's ID
        NEW.id := matched_id;
    ELSE
        -- If no match is found, increment the current maximum ID
        SELECT COALESCE(MAX(id), 0) + 1 INTO max_id FROM demo;
        NEW.id := max_id;
    END IF;
    RETURN NEW;
END;
$$;

The above function, `compare_and_link()`, is a trigger function written in Procedural Language/PostgreSQL (`LANGUAGE plpgsql`), and it is what keeps record linkage accurate in the database. It first attempts to find a record in the `demo` table that matches the incoming `NEW` record, comparing two fields, `normalised_agency_name` and `Agent Address`, with `pg_trgm`’s `similarity()` function. The `LIMIT 1` ensures at most one candidate is selected into `matched_id`.

`similarity()` returns a score between 0 and 1, and a score above 0.7 on both fields is deemed a match. If a match is found, the new record adopts the ID of the matched existing record, thereby linking them together. If no match is found, the function takes the current maximum `id` in the table, increments it by 1, and assigns the result to the incoming record.

Essentially, this function fosters consistency in the database and aids in accurate linkage of related records, thereby bolstering the reliability of data manipulation and analysis.
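The control flow of the trigger can be sketched in plain Python. Note that `difflib.SequenceMatcher` is used here only as a stand-in for PostgreSQL’s `similarity()`; it computes a different measure, but the match-or-mint-a-new-id logic is the same:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    # Stand-in for pg_trgm's similarity(): a different metric,
    # but likewise scored between 0 and 1
    return SequenceMatcher(None, a, b).ratio()

def compare_and_link(new_row, existing_rows):
    """Sketch of the trigger's logic over in-memory dicts."""
    for row in existing_rows:
        if (similarity(new_row["name"], row["name"]) > 0.7
                and similarity(new_row["address"], row["address"]) > 0.7):
            new_row["id"] = row["id"]  # link to the matched record
            return new_row
    # No match: increment the current maximum id
    max_id = max((r["id"] for r in existing_rows), default=0)
    new_row["id"] = max_id + 1
    return new_row

existing = [{"id": 895, "name": "test corp", "address": "10 downing street"}]
linked = compare_and_link({"name": "test corp1", "address": "10 downing street"}, existing)
print(linked["id"])  # 895
```

As in the trigger, a near-duplicate row inherits the existing record’s ID, while a dissimilar row would receive the next ID in sequence.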

All of this is wired into the table with a trigger:

CREATE TRIGGER compare_and_link_trigger
BEFORE INSERT ON demo
FOR EACH ROW
WHEN (NEW.id IS NULL)
EXECUTE FUNCTION compare_and_link();

For every new row inserted without an `id` (the `WHEN (NEW.id IS NULL)` clause), the function runs and assigns one: either the ID of the matching record, or a newly incremented unique ID.

insert into demo ("Agent Company Name", normalised_agency_name, "Agent Address") VALUES ('Test corp1', 'Test corp1', '10 Downing Street');
insert into demo ("Agent Company Name", normalised_agency_name, "Agent Address") VALUES ('Test corp', 'Test corp', '10 Downing Street');
insert into demo ("Agent Company Name", normalised_agency_name, "Agent Address") VALUES ('Spicule LTD', 'Spicule LTD', '10 My Road');
planning=> select "Agent Company Name", normalised_agency_name, id from demo where normalised_agency_name like 'Test corp%' or normalised_agency_name = 'Spicule LTD';
 Agent Company Name | normalised_agency_name |  id
--------------------+------------------------+------
 Test corp          | Test corp              |  895
 Test corp1         | Test corp1             |  895
 Spicule LTD        | Spicule LTD            | 1086

As the output above shows, I inserted two rows that are nearly identical, differing only by a “1” appended to the name field. Despite this distinction, both rows were assigned the same ID. The third row, with entirely different values, received a new ID because it was not similar to the preceding rows.

Combining Python’s Jaro-Winkler method with PostgreSQL’s `pg_trgm` extension can lead to powerful and efficient data processing pipelines. Stay tuned for a deep dive into these exciting topics.
