Archive for July, 2009
Hierarchical data in MySQL: parents and children in one query
Answering questions asked on the site.
Michael asks:
I was wondering how to implement a hierarchical query in MySQL (using the ancestry chains version) for a single row, such that it picks up the parents (if any) and any children (if any).
The idea is, I want to be able to jump in at any point, provide an Id of some sort, and be able to draw out the entire hierarchy for that Id, both upwards and downwards.
We need to combine two queries here:
- The original hierarchical query that returns all descendants of a given id (a descendancy chain)
- A query that returns all ancestors of a given id (an ancestry chain)
An id can have only one parent, which is why we can employ a linked list technique to build an ancestry chain, as shown in this article:
Here's the query to do this (no functions required):
SELECT  @r AS _id,
        (
        SELECT  @r := parent
        FROM    t_hierarchy
        WHERE   id = _id
        ) AS parent,
        @l := @l + 1 AS lvl
FROM    (
        SELECT  @r := 1218, @l := 0, @cl := 0
        ) vars, t_hierarchy h
WHERE   @r <> 0
To combine the two queries, we can employ a simple UNION ALL.
The only problem left is to preserve the correct level, since the ancestry chain query counts the level backwards, while the hierarchical query counts it starting from zero.
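Here is a minimal sketch of the level fix for the ancestry half, using the t_hierarchy(id, parent) table and the starting id 1218 from the query above (the exact combined query is shown later in the article): the counter starts at -1 and is negated, so the starting row gets level 0 and its ancestors get -1, -2, and so on, which lines up with the descendancy chain that counts from zero. The result can then be UNION ALLed with the descendancy chain; note that the starting row would appear in both halves, so one of the duplicates needs to be dropped.
SELECT  _id, -lvl AS level
FROM    (
        SELECT  @r AS _id,
                (
                SELECT  @r := parent
                FROM    t_hierarchy
                WHERE   id = _id
                ) AS parent,
                @l := @l + 1 AS lvl   -- starts at 0 for the row itself, 1 for its parent, etc.
        FROM    (
                SELECT  @r := 1218, @l := -1
                ) vars, t_hierarchy h
        WHERE   @r <> 0
        ) q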
Let's create a sample table and see what we get:
Read the rest of this entry »
PostgreSQL 8.4: sampling random rows
On Jul 1, 2009, PostgreSQL 8.4 was released.
In this series of articles, I'd like to show how to reimplement some tasks I wrote about in the previous blog posts using new PostgreSQL features.
Other articles on new features of PostgreSQL 8.4:
Today, I'll show a way to sample random rows from a PRIMARY KEY preserved table.
Usually, if we need, say, 10 random rows from a table, we issue this query:
SELECT * FROM t_random ORDER BY RANDOM() LIMIT 10
PostgreSQL heavily optimizes this query: since it sees a LIMIT clause, it does not sort all rows. Instead, it keeps a running buffer containing at most 10 rows with the least values of RANDOM() calculated so far, and when a small enough row is met, it sorts only this buffer, not the whole set.
This is quite efficient, but still requires a full table scan.
This can be a problem, since queries like this are often run frequently on heavily loaded sites (like showing 10 random pages on social bookmarking systems), and full table scans will hamper performance significantly.
With PostgreSQL 8.4's new ability to run recursive queries, this can be improved.
We can sample random values of the row ids and use an array to record previously selected values.
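Here is a simplified sketch of the idea, not the article's final query: it assumes a table t_random(id) with ids densely numbered from 1 to 100000 (these names and the range are assumptions). A recursive CTE accumulates 10 random ids into an array, which is then joined back to the table. Collisions between the random picks are not handled here; the full solution in the article uses the accumulated array to detect and retry them.
WITH RECURSIVE q (ids, lvl) AS
(
        SELECT  ARRAY[(1 + FLOOR(RANDOM() * 100000))::INT], 1
        UNION ALL
        SELECT  ids || (1 + FLOOR(RANDOM() * 100000))::INT, lvl + 1   -- append another random id
        FROM    q
        WHERE   lvl < 10
)
SELECT  t.*
FROM    t_random t
JOIN    (
        SELECT  ids
        FROM    q
        ORDER BY lvl DESC
        LIMIT 1
        ) p
ON      t.id = ANY(p.ids)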
Let's create a sample table and see how we can improve this query:
Read the rest of this entry »
PostgreSQL 8.4: preserving order for hierarchical query
On Jul 1, 2009, PostgreSQL 8.4 was released.
In this series of articles, I'd like to show how to reimplement some tasks I wrote about in the previous blog posts using new PostgreSQL features.
Previously in the series:
Now, let's see how we can implement the hierarchical queries using the new features of PostgreSQL 8.4.
In PostgreSQL 8.3, we had to create a recursive function to do that. If you are still bound to 8.3 or an earlier version, you can read this article to see how to do it:
In 8.4, we have recursive CTE's (common table expressions).
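Here is a minimal sketch of one way to do it, assuming a table t_hierarchy(id, parent) with parent = 0 for root rows (the names are assumptions, not taken from the article body). A common way to preserve depth-first order is to accumulate the path into an array and sort by it; the article's exact query may differ.
WITH RECURSIVE q AS
(
        SELECT  h.id, h.parent, ARRAY[h.id] AS path
        FROM    t_hierarchy h
        WHERE   h.parent = 0
        UNION ALL
        SELECT  hc.id, hc.parent, q.path || hc.id
        FROM    q
        JOIN    t_hierarchy hc
        ON      hc.parent = q.id
)
SELECT  id, parent, path
FROM    q
ORDER BY
        path   -- ordering by the materialized path keeps each child right after its parent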
Let's create a sample hierarchical table and see how we can query it:
Read the rest of this entry »
INNER JOIN vs. CROSS APPLY
From Stack Overflow:
Can anyone give me a good example of when CROSS APPLY makes a difference in those cases where INNER JOIN will work as well?
This is of course SQL Server.
A quick reminder on the terms.
INNER JOIN is the most used construct in SQL: it joins two tables together, selecting only those row combinations for which a JOIN condition is true.
This query:
SELECT * FROM table1 JOIN table2 ON table2.b = table1.a
reads:
For each row from table1, select all rows from table2 where the value of field b is equal to that of field a.
Note that this query can be rewritten as follows:
SELECT * FROM table1, table2 WHERE table2.b = table1.a
, in which case it reads as follows:
Make a set of all possible combinations of rows from table1 and table2, and from this set select only the rows where the value of field b is equal to that of field a.
These queries are worded differently, but yield the same result, and database systems are aware of that: usually both are optimized to use the same execution plan.
The former syntax is called ANSI syntax; it is generally considered more readable and is the recommended one.
However, it didn't make it into Oracle until recently, which is why many hardcore Oracle developers are simply used to the latter syntax.
Actually, it's a matter of taste.
To use JOINs (with whatever syntax), both sets you are joining must be self-sufficient, i.e. the sets should not depend on each other. You can query either set without ever knowing the contents of the other set.
But for some tasks the sets are not self-sufficient. For instance, let's consider the following query:
We have table1 and table2. table1 has a column called rowcount.
For each row from table1, we need to select the first rowcount rows from table2, ordered by table2.id.
We cannot come up with a join condition here. The join condition, should it exist, would involve the row number, which is not present in table2, and there is no way to calculate a row number only from the values of the columns of any given row in table2.
That's where CROSS APPLY can be used.
CROSS APPLY is a Microsoft extension to SQL which was originally intended to be used with table-valued functions (TVF's).
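For illustration, the TVF use case might look like the sketch below; the function fn_top_rows is a hypothetical helper (not from the article), while table1, table2 and rowcount are the names from the problem above.
CREATE FUNCTION dbo.fn_top_rows (@cnt INT)   -- hypothetical inline TVF
RETURNS TABLE
AS
RETURN
(
        SELECT  TOP (@cnt) *
        FROM    table2
        ORDER BY id
)
GO

SELECT  *
FROM    table1 t1
CROSS APPLY
        dbo.fn_top_rows(t1.[rowcount]) t2   -- the TVF is re-evaluated for each row of table1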
The query above would look like this:
SELECT  *
FROM    table1
CROSS APPLY
        (
        SELECT  TOP (table1.rowcount) *
        FROM    table2
        ORDER BY id
        ) t2
For each row from table1, select the first table1.rowcount rows from table2 ordered by id.
The sets here are not self-sufficient: the query uses values from table1 to define the second set, not to JOIN with it.
The exact contents of t2 are not known until the corresponding row from table1 is selected.
I previously said that there is no way to join these two sets, which is true as long as we consider the sets as is. However, we can change the second set a little so that we get an additional computed field we can later join on.
The first option to do that is to just count all preceding rows in a subquery:
SELECT  *
FROM    table1 t1
JOIN    (
        SELECT  t2o.*,
                (
                SELECT  COUNT(*)
                FROM    table2 t2i
                WHERE   t2i.id <= t2o.id
                ) AS rn
        FROM    table2 t2o
        ) t2
ON      t2.rn <= t1.rowcount
The second option is to use a window function, also available in SQL Server since version 2005:
SELECT  *
FROM    table1 t1
JOIN    (
        SELECT  t2o.*,
                ROW_NUMBER() OVER (ORDER BY id) AS rn
        FROM    table2 t2o
        ) t2
ON      t2.rn <= t1.rowcount
This function returns the ordinal number a row would have if the ORDER BY condition used in the function were applied to the whole query.
This is essentially the same result as the subquery used in the previous query.
Now, let's create the sample tables and check all these solutions for efficiency:
Oracle: OR on multiple EXISTS clauses
Comments enabled. I *really* need your comment
From Stack Overflow:
I have two queries, and I want to understand which is better in terms of performance and memory:
SELECT  DISTINCT a.no, a.id1, a.id2
FROM    tbl_b b, tbl_a a, tbl_c c, tbl_d d
WHERE   (
        b.id1 = a.id1
        AND a.id1 = c.id1
        AND UPPER(c.flag) = 'Y'
        AND c.id1 = d.id1
        )
        OR
        (
        b.id2 = a.id2
        AND a.id2 = c.id2
        AND UPPER(c.flag) = 'Y'
        AND c.id2 = d.id2
        )
        AND d.id3 = 10
and
SELECT  DISTINCT a.no, a.id1, a.id2
FROM    tbl_a a
WHERE   EXISTS
        (
        SELECT  a.id1, a.id2
        FROM    tbl_c c
        WHERE   (a.id1 = c.id1 OR a.id2 = c.id2)
                AND UPPER(c.flag) = 'Y'
        )
        AND EXISTS
        (
        SELECT  a.id1, a.id2
        FROM    tbl_b b
        WHERE   b.id1 = a.id1 OR b.id2 = a.id2
        )
        AND EXISTS
        (
        SELECT  a.id1, a.id2
        FROM    tbl_d d
        WHERE   (a.id1 = d.id1 OR a.id2 = d.id2)
                AND d.id3 = 10
        )
The tables tbl_b and tbl_d are very large tables containing 500,000 to millions of rows, while table tbl_a is relatively small.
My requirement is to pick up only those records from table tbl_a whose id (either id1 or id2) is available in the tbl_b, tbl_c, and tbl_d tables, satisfying certain other conditions as well.
Which is best performance-wise?
We can see that both these queries contain an OR
condition, a nightmare for most optimizers.
The first query uses a join on all four tables, concatenating the results and making a distinct set out of them.
The second query checks each row in tbl_a, making sure that the corresponding records exist in the other tables in one way or another.
These queries are not identical: the first query will select the rows from tbl_a matching all tables on the same id (either three matches on id1 or three matches on id2), while the second query returns rows matching on any id.
That is, if we have a row matching tbl_b and tbl_c on id1 and tbl_d on id2, this row will be returned by the second query but not by the first.
Both these queries will perform poorly on large tables. However, we can improve them.
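As a taste of what such an improvement can look like, here is a sketch of one common rewrite, not necessarily the article's final solution: each OR'ed correlated predicate is split into two UNION ALL'ed branches inside its EXISTS, so that every branch can be driven by a single-column index.
SELECT  DISTINCT a.no, a.id1, a.id2
FROM    tbl_a a
WHERE   EXISTS
        (
        SELECT  NULL FROM tbl_c c WHERE c.id1 = a.id1 AND UPPER(c.flag) = 'Y'
        UNION ALL
        SELECT  NULL FROM tbl_c c WHERE c.id2 = a.id2 AND UPPER(c.flag) = 'Y'
        )
        AND EXISTS
        (
        SELECT  NULL FROM tbl_b b WHERE b.id1 = a.id1
        UNION ALL
        SELECT  NULL FROM tbl_b b WHERE b.id2 = a.id2
        )
        AND EXISTS
        (
        SELECT  NULL FROM tbl_d d WHERE d.id1 = a.id1 AND d.id3 = 10
        UNION ALL
        SELECT  NULL FROM tbl_d d WHERE d.id2 = a.id2 AND d.id3 = 10
        )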
Let's create the tables, fill them with sample data and make the improvements:
Read the rest of this entry »
Flattening timespans: PostgreSQL 8.4
On Jul 1, 2009, PostgreSQL 8.4 was released.
Among other improvements, this version supports window functions, recursive queries and common table expressions (CTE's).
Despite being a minor release (according to the version numbering), this version may well become quite a milestone, since these features make a developer's life much, much easier.
Let's check how efficiently these features are implemented.
To do this, I'll take some tasks that I wrote about in the previous blog posts and try to reimplement them using new PostgreSQL features.
I'll start with quite a common task of flattening the intersecting timespans which I wrote about in this article:
This task requires calculating a running maximum and taking a previous record from a recordset, and therefore is a good illustration for window functions.
A quick reminder of the problem, taken from Stack Overflow:
I have lots of data with start and stop times for a given ID and I need to flatten all intersecting and adjacent timespans into one combined timespan.
To make things a bit clearer, take a look at the sample data for 03.06.2009:
The following timespans are overlapping or continuous and need to be merged into one timespan:
date        start         stop
2009.06.03  05:54:48:000  10:00:13:000
2009.06.03  09:26:45:000  09:59:40:000
The resulting timespan would be from 05:54:48 to 10:00:13.
Since there's a gap between 10:00:13 and 10:12:50, we also have the following timespans:
date        start         stop
2009.06.03  10:12:50:000  10:27:25:000
2009.06.03  10:13:12:000  11:14:56:000
2009.06.03  10:27:25:000  10:27:31:000
2009.06.03  10:27:39:000  13:53:38:000
2009.06.03  11:14:56:000  11:15:03:000
2009.06.03  11:15:30:000  14:02:14:000
2009.06.03  13:53:38:000  13:53:43:000
2009.06.03  14:02:14:000  14:02:31:000
which result in one merged timespan from 10:12:50 to 14:02:31, since they're overlapping or adjacent.
Any solution, be it SQL or not, is appreciated.
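Before building the sample data, here is a minimal sketch of the window-function approach mentioned above: a LAG over a running MAX of the stop times marks where a new flattened span begins, and a running SUM of those marks numbers the spans. The table and column names (t_span, d_date, d_start, d_stop) are assumptions, not taken from the article.
SELECT  d_date, MIN(d_start) AS span_start, MAX(d_stop) AS span_stop
FROM    (
        SELECT  d_date, d_start, d_stop,
                SUM(CASE WHEN prev_stop IS NULL OR d_start > prev_stop THEN 1 ELSE 0 END)
                        OVER (PARTITION BY d_date ORDER BY d_start, d_stop) AS span_no
        FROM    (
                SELECT  d_date, d_start, d_stop,
                        LAG(running_stop) OVER (PARTITION BY d_date ORDER BY d_start, d_stop) AS prev_stop
                FROM    (
                        SELECT  d_date, d_start, d_stop,
                                MAX(d_stop) OVER (
                                        PARTITION BY d_date
                                        ORDER BY d_start, d_stop
                                        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
                                ) AS running_stop   -- latest stop time seen so far
                        FROM    t_span
                        ) q1
                ) q2
        ) q3
GROUP BY d_date, span_no
ORDER BY d_date, span_start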
Let's create a sample table:
Read the rest of this entry »
SQL Server: aggregate bitwise OR
From Stack Overflow:
I am creating a script for merging and deleting duplicate rows from a table.
The table contains address information, and uses an integer field for storing information about the email as bit flags (column name value). For example, if bit 1 is set in value, that means the record is a primary address.
There are instances of the same email being entered twice, but sometimes with different values. To resolve this, I need to take the value from all duplicates, assign them to one surviving record and delete the rest.
My biggest headache so far has been with the merging of the records. What I want to do is bitwise OR all values of duplicate records together.
From database theory's point of view, this design of course violates 1NF, since multiple properties are contained in one column (in bit-packed form). It would be easier to split them apart and create a separate column for each bit.
However, it can be a legitimate design if the fields are not parsed on the database side, but instead passed as-is to a client which needs them in this bit-packed form. And anyway, helping is better than criticizing.
We have three problems here:
- Select the first record for each set of duplicates
- Update this record with the bitwise OR of all values in its set
- Delete all other records
Step 1 is easy to do using ROW_NUMBER().
Step 3 is also not very hard. Microsoft has a knowledge base article KB139444 that describes a really weird way to remove duplicates, but it can be done much more easily using the same ROW_NUMBER() with a CTE or an inline view.
See this article I wrote some time ago on how to do this:
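For reference, steps 1 and 3 together can look like the following sketch, assuming a table t_address(id, email, value); these names are illustrative, not the article's, and the surviving record per email is taken to be the one with the lowest id.
WITH numbered AS
(
        SELECT  *,
                ROW_NUMBER() OVER (PARTITION BY email ORDER BY id) AS rn
        FROM    t_address
)
DELETE
FROM    numbered
WHERE   rn > 1   -- keep only the first record of each set of duplicates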
Now, the main problem is step 2.
SQL Server lacks a native way to calculate bitwise aggregates, but with a little effort it can be emulated.
The main idea here is that for bit values, aggregate OR and AND can be replaced with MAX and MIN, respectively.
All we need is to split each value into the bits, aggregate each bit and merge the results together.
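For example, a sketch for the lowest four bits of the value column might look like this, using the same assumed t_address layout as above; the pattern extends up to the highest bit actually used.
SELECT  email,
        MAX([value] & 1)
        + MAX([value] & 2)
        + MAX([value] & 4)
        + MAX([value] & 8) AS value_or   -- each MAX emulates an aggregate OR of one bit
FROM    t_address
GROUP BY
        email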
Let's create a sample table:
Read the rest of this entry »
Double-thinking in SQL
One of the first things a novice SQL developer learns about is called thinking in SQL, which is usually opposed to procedural thinking.
Let's see what part of the brain this intellectual activity takes and how to use it.
Two features distinguish SQL from other languages you learned as an 11-year-old kid on your first PC, like BASIC or perl or maybe even C++ if you're such a Wunderkind.
First, SQL is set-based. It does things with sets.
Every tool is designed to do things with something else. Like, you use a hammer to do things with nails, or use a screwdriver to do things with screws, or use an oven to do things with food.
Same with computer languages.
BASIC does things with variables. perl does things with scalars, arrays, hashes and file streams. Assembly does things with registers and memory.
You should not be confused by something like "registers are just a special case of variables", or "a hash is just a generalized container which exposes such and such methods", or something like that. No.
A hash is a hash, a variable is a variable and a register is a register.
Like, an egg is a food and rice is a food, and it's possible to cook some eggs in a rice cooker and vice versa, but they are just the wrong tools for that.
Prehistoric men had to make do with hammerstones and lithic blades (even to court their women), but now we have a whole district in Tokyo City for gadgets with USB type A, and another district for gadgets with USB type B.
So if you feel the urge to hash something and then make a good old array out of this, you don't use assembly, but perl or PHP instead.
Same with SQL. SQL does things with sets.
It's a tool that allows you to take a dozen or two sets, mix them together, knead and wedge them, then chop them apart and mix again, but the output you get is still a set, and all the inputs are sets.
Everything you do in SQL, you do on sets. That's why SQL is called a set-oriented language.
Ok, that was the first feature that distinguishes SQL from other languages. What's the second one?
SQL is a declarative language. This means that you express what you want to do with sets, not how you want to do it.
This requires a little explanation.
Read the rest of this entry »
Selecting compatible articles
Comments enabled. I *really* need your comment
From Stack Overflow:
I need to formulate an SQL query that returns all articles that are compatible to a set of other articles (of arbitrary size).
So for a list of article numbers A, B, …, N the question is:
Give me all articles that are compatible with A and B and … and N.
For example, consider the table:
A  B
1  2
3  1
3  4
If I wanted all articles that are compatible with 1, the query would return (2, 3).
The query generated by the list (2, 3) will return 1, whilst the query generated from the list (1, 3) generates an empty list.
This table describes a friendship: a symmetric irreflexive binary relation.
That is:
- For any given a, b, if a is a friend to b, then b is a friend to a
- For any given a, a is never a friend to itself
This relation is heavily used by social networks.
A normalized table describing this relation should be defined like this:
CREATE TABLE t_set (
a INT NOT NULL,
b INT NOT NULL
)
ALTER TABLE t_set ADD CONSTRAINT pk_set_ab PRIMARY KEY (a, b)
ALTER TABLE t_set ADD CONSTRAINT ck_set_ab CHECK (a < b)
The check constraint is added to account for the relation symmetry: since the relation is symmetric, each pair is stored only once, with a < b.
The complete relation can be retrieved with the following query:
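Most likely that query is a simple symmetric expansion of the stored pairs, something along the lines of this sketch (the article's exact query is behind the cut):
SELECT  a, b
FROM    t_set
UNION ALL
SELECT  b, a
FROM    t_set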
Read the rest of this entry »
Selecting birthdays
Comments enabled. I *really* need your comment
Answering questions asked on the site.
James asks:
I'm developing a forum and want to select all users that have a birthday within the next 3 days.
How do I do it?
This is in SQL Server
This is a very nice feature which every decent forum should have, and I'll be glad to answer this question.
Unfortunately you didn't provide the names of your tables, so I'll have to make them up.
It's not much of a stretch to assume that your table is called t_user and you keep the users' birthdates in a DATETIME field called birthdate.
A birthday within the next 3 days means that if you add the person's age to the person's birthdate, you get a date between the current date and three days after it.
To check this, we just need to calculate the number of months between the dates and make sure that it is divisible by 12 with a remainder of 0 or 11 (to handle month transitions).
Then we need to add a transition month and divide the number of months by 12. The quotient will give us the number of years we need to add to the birthdate to compare the result with GETDATE().
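Here is a minimal sketch of that idea, not necessarily the article's exact query: the number of years to add is derived from the month difference (plus the transition month, divided by 12), and the resulting anniversary is compared against a three-day window. It uses the t_user and birthdate names assumed above, presumes birthdates are stored at midnight, and omits the remainder check described above.
SELECT  *
FROM    t_user
WHERE   DATEADD(year,
                (DATEDIFF(month, birthdate, GETDATE()) + 1) / 12,   -- years to add, including the transition month
                birthdate)
        BETWEEN DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0)        -- today at midnight
            AND DATEADD(day, 3, DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0))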
Let's create a sample table and see how to do it:
Read the rest of this entry »