EXPLAIN EXTENDED

How to create fast database queries


Efficient INSERT ON DUPLICATE KEY UPDATE

Comments enabled. I *really* need your comment

From Stack Overflow:

I'm implementing a memory cache for a table in MySQL that looks like this:

item1 item2 cnt

The original table contains millions of such pairs and is updated very frequently.

To make it all more efficient I want to write new pairs to an identical memory table and update the real table on disk periodically by cron.

The cron job should do the following: for each pair, if a matching pair already exists in the on-disk table, increase its count by the count from the memory table.

If no such pair exists, create it with the count from the memory table.

How can I make the flush (from memory table to real table) most efficient?

This is of course best achieved with INSERT ON DUPLICATE KEY UPDATE.

It is MySQL's flavor of the INSERT statement: it handles key violations automatically by updating the row that caused the violation instead of inserting a new one.
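
As a quick illustration, here is a minimal sketch of the syntax, assuming a target table named t_target with a unique key over (item1, item2); the table name is a placeholder, not taken from the question:

    -- Inserts the pair (1, 2) with a count of 5, or, if the pair already
    -- exists, adds 5 to its current count
    INSERT INTO t_target (item1, item2, cnt)
    VALUES (1, 2, 5)
    ON DUPLICATE KEY UPDATE cnt = cnt + VALUES(cnt);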

If we provide a SELECT statement as a row source for INSERT ON DUPLICATE KEY UPDATE, multiple input rows may cause a key violation on the same target row. In this case, that target row will be updated multiple times.

To achieve the desired result, we have two options:

  1. Use the memory table as a row source for INSERT ON DUPLICATE KEY UPDATE as is. The target rows will be updated multiple times.
  2. Aggregate the counters in the memory table and use the aggregated values as the row source. The target rows will be updated at most once.

The first solution may update each target row many times (which can be slow), while the second one adds the overhead of a GROUP BY.
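
In terms of the question's schema, the two options could look roughly like this; the table names t_memory (the memory table) and t_target (the on-disk table) are placeholders, both tables having the columns item1, item2, cnt and the target having a unique key over (item1, item2):

    -- Option 1: use the memory table as is; a target row is updated once
    -- for every matching source row
    INSERT INTO t_target (item1, item2, cnt)
    SELECT  item1, item2, cnt
    FROM    t_memory
    ON DUPLICATE KEY UPDATE cnt = t_target.cnt + VALUES(cnt);

    -- Option 2: pre-aggregate the counters; each target row is updated
    -- at most once
    INSERT INTO t_target (item1, item2, cnt)
    SELECT  item1, item2, SUM(cnt)
    FROM    t_memory
    GROUP BY item1, item2
    ON DUPLICATE KEY UPDATE cnt = t_target.cnt + VALUES(cnt);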

Let's create the sample tables and see which one is more efficient.
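
For reference, the sample tables could be defined roughly as follows (a sketch based on the schema from the question; the table names and column types are assumptions):

    CREATE TABLE t_target (
            item1 INT NOT NULL,
            item2 INT NOT NULL,
            cnt INT NOT NULL,
            PRIMARY KEY (item1, item2)
    ) ENGINE=InnoDB;

    CREATE TABLE t_memory (
            item1 INT NOT NULL,
            item2 INT NOT NULL,
            cnt INT NOT NULL
    ) ENGINE=MEMORY;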

Written by Quassnoi

June 27th, 2009 at 11:00 pm

Posted in MySQL