Federated engine – MySQL table as symlink

Are you aware of the Federated storage engine in MySQL (besides the usual MyISAM and InnoDB)?

This engine allows you to define a table that pulls its data from another table, even on a remote server. The table definitions must be identical.

I use it for the following:

  1. Every time I rebuild the project, I have to wait for 15 minutes while two big tables are created and filled with data — these are the geo data tables (world cities, regions, etc.), 4 million records, and the POI table, 2 million records. I use Federated tables to keep them in two separate databases and just link these tables into my project.
  2. These tables are shared between several environments (dev, test and live) on the same server.

To check whether your MySQL server supports the Federated engine, you can use phpMyAdmin — go to the home page of your phpMyAdmin installation (click the Home icon), then choose the Engines tab and look there.
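
Alternatively, you can check it from the mysql console:

SHOW ENGINES;
-- look for the FEDERATED row: Support must say YES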

If it’s not enabled (grayed out), open your my.ini file, find the “[mysqld]” section and make it look like this:

[mysqld]
federated
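
After restarting MySQL, you define a federated table with the same structure as the remote one, plus a CONNECTION string pointing at it. A minimal sketch (host, credentials, database and table names here are made up):

CREATE TABLE `geo_city` (
  `id` INT UNSIGNED NOT NULL,
  `name` VARCHAR(255) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=FEDERATED
CONNECTION='mysql://user:pass@127.0.0.1:3306/geo_shared/geo_city';
-- reads and writes on geo_city now go to the remote geo_shared.geo_city table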

P.S. If you have an error in the table definition, phpMyAdmin shows your database as empty. To debug this, log in via the mysql console and try a SELECT from the poorly defined table — you will get an error message to work with.

Currency exchange in your application

That’s easy; you need two things:

  1. Fresh currency exchange rates
  2. Some way to exchange an amount from one currency to another.

This is how I did it: I get the values from the European Central Bank (ECB) for step #1, and I wrote a MySQL user-defined function for step #2.

Here is how to import the currency rates from the ECB (EUR is the base currency, and I add its self rate as 1:1). First I create this database table:

CREATE TABLE IF NOT EXISTS `currency` (
  `code` char(3) NOT NULL DEFAULT '',
  `rate` decimal(10,5) NOT NULL COMMENT 'Rate to EUR got from www.ecb.int',
  PRIMARY KEY (`code`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='Currency rates (regularly updated)';

Now let’s fill it with rates:

<?php

class CurrencyController extends Controller_Ajax_Action {

  public function importAction() {
  
    $db = Zend_Registry::get('db');
    $db->beginTransaction();
    
    $url = 'http://www.ecb.int/stats/eurofxref/eurofxref.zip?1c7a343768baab4322620e3498553b5a';
    try {
      // the zip contains one CSV file: first line currency codes, second line rates
      $contents = file_get_contents($url);
      $contents = archive::unzip($contents); // helper class, see P.P.S. below
      $contents = explode("\n", $contents);
      
      $names = explode(',', $contents[0]);
      $rates = explode(',', $contents[1]);
      
      // EUR is the base currency, so add its self rate of 1
      $names[] = 'EUR';
      $rates[] = 1;
    
      // start at 1 to skip the leading date column
      for ($i = 1; $i < count($names); $i++) {
        if (!(float) $rates[$i]) continue; // skip empty trailing fields
        $db->query( sprintf('INSERT INTO `currency`(`code`, `rate`)
              VALUES ("%s", %10.5f)
              ON DUPLICATE KEY UPDATE `rate`=VALUES(`rate`)', 
             trim( $names[$i] ), 
             trim( $rates[$i] )
        ) );
      }
      
      $db->commit();
    } catch ( Exception $O_o ) {
      error_log( $O_o->getMessage() );
      $db->rollBack();
    }
    
  }
}

Now let’s create an SQL function for handy conversions. Create a udf.sql file and put this in it:

DROP FUNCTION IF EXISTS EXCHANGE;

DELIMITER //

CREATE FUNCTION EXCHANGE( amount DOUBLE, cFrom CHAR(3), cTo CHAR(3) ) RETURNS DOUBLE READS SQL DATA DETERMINISTIC
    COMMENT 'converts money amount from one currency to another'
BEGIN
    -- NULL defaults let the ISNULL check below catch unknown currency codes
    DECLARE rateFrom DOUBLE DEFAULT NULL;
    DECLARE rateTo DOUBLE DEFAULT NULL;
    
    SELECT `rate` INTO rateFrom FROM `currency` WHERE `code` = cFrom;
    SELECT `rate` INTO rateTo   FROM `currency` WHERE `code` = cTo;
    
    IF ISNULL( rateFrom ) OR ISNULL( rateTo ) THEN
        RETURN NULL;
    END IF;
    
    RETURN amount * rateTo / rateFrom;
END //

DELIMITER ;

and run this command in your shell:

mysql --user=USER --password=PASS DATABASE < udf.sql

This is how you can use the function, e.g. to convert 10 US dollars to Canadian dollars:

SELECT EXCHANGE( 10, 'USD', 'CAD')

which results in $10 = 10.93 Canadian dollars.
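
Since all the rates are stored relative to EUR, the function converts through the euro: amount * rateTo / rateFrom. For instance, with hypothetical rates of 1.35000 USD and 1.47600 CAD per euro, the result above would be computed as:

SELECT 10 * 1.47600 / 1.35000; -- to EUR first, then to CAD: ≈ 10.93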

P.S. Consider adding a call to the currency import action to your cron scripts.
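
A possible crontab entry, assuming the import action above is routed at /currency/import (the URL is hypothetical):

# refresh the currency rates every morning
0 6 * * * wget -q -O /dev/null http://example.com/currency/import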

P.P.S. A function to unzip the data file can be found at php.net.

XML to CSV conversion

Data feeds often come in XML format, so your application must be able to deal with it.

As I already wrote, data in CSV format (comma-separated values) can be loaded into the database extremely fast. So my idea was to convert XML data files to CSV and then bulk-load them into the database. My tests showed that this is 10-100 times faster than one-by-one inserts.
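
A minimal sketch of such a conversion, using the poi.xml feed shown further below (the file paths are illustrative):

<?php
// convert a flat XML feed to CSV, then bulk-load the CSV file
$xml = simplexml_load_file('/path/to/poi.xml');
$fp  = fopen('/path/to/poi.csv', 'w');

foreach ($xml->wpt as $wpt) {
    fputcsv($fp, array(
        (string) $wpt['lat'], // attributes are accessed as array keys
        (string) $wpt['lon'],
        (string) $wpt->name,  // child nodes as properties
    ));
}
fclose($fp);

After that, a single LOAD DATA INFILE '/path/to/poi.csv' INTO TABLE `poi` FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' does the bulk load.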

Yesterday I decided to write a generalized solution for this, and it turned out that there is no need — MySQL 6 will have such a feature!

How it works: you create a table and name its columns exactly after the XML node/attribute names — the MySQL server will then load the data into the corresponding columns.

Example — you have downloaded a POI (Points of Interest) list file called poi.xml that looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<gpx>
    <wpt lat="58.691931900" lon="11.253125962">
        <name>Parking</name>
    </wpt>
    <wpt lat="58.315525000" lon="12.305828000">
          <name>Fast Food Restaurant:Max i trollhattan</name>
    </wpt>
    <wpt lat="57.717958100" lon="11.880860600">
          <name>Picnic spot</name>
    </wpt>
</gpx>

You create a MySQL table:

CREATE TABLE IF NOT EXISTS `poi` (
  `lat` varchar(255) NOT NULL,
  `lon` varchar(255) NOT NULL,
  `name` varchar(255) NOT NULL
) DEFAULT CHARSET=utf8;

OK, now you load the XML data into your table:

LOAD XML INFILE '\\path\\to\\poi.xml'
INTO TABLE `poi`
ROWS IDENTIFIED BY '<wpt>';

Voila!

The good thing is that MySQL 6 is already available as an alpha version — good enough for development purposes. I gave it a try — it takes 5 seconds to load 4.8 MB of data in 19 files.

Sharing work between agents

There are situations when you need to split the processing of a big amount of data between several “agents”, e.g.:

  • you have a long list of websites which must be checked for being alive (404 error check) by your web-crawlers;
  • a queue of photos to be resized or videos to be converted;
  • articles that your editors must review;
  • catalogue of blog feeds that your system must import posts from;
  • etc.

The idea is simple:

  1. Give a small piece of the big work to an agent.
  2. Mark this piece as given to that agent (so that no one else starts doing the same job) and remember the timestamp when the job was given out or when it becomes obsolete (the agent is dead, let’s give the job to someone else).
  3. If the work is done — go to step #1.
  4. After some period of time (say, 1 hour) check all the timestamps, and if some agents didn’t cope with their jobs, mark those jobs as free so that others can start working on them.

The problem lies between steps #1 and #2 — after you have given a job to Agent 1 but before you have marked it as taken, what if Agent 2 is handed the same job? If you have many agents, this can really happen. This situation is called a concurrent read/write.

To overcome this, a lock can be used.

In this article I will explain how to use locks in a Zend Framework project with a MySQL database.

First of all, the MySQL documentation tells us that SELECT ... FOR UPDATE can be used for this purpose: the first step is to select the records with that statement, and the second step is to mark them as locked. The requirements are to use the InnoDB storage engine and to frame these two statements in a transaction.
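
In raw SQL the pattern looks roughly like this (using the article table and lock columns introduced below; the concrete id list comes from the first statement):

START TRANSACTION;

-- step 1: pick free rows and lock them against concurrent sessions
SELECT `id` FROM `article`
 WHERE `locked_by` IS NULL
 LIMIT 5
 FOR UPDATE;

-- step 2: mark them as taken by agent 1 for the next hour
UPDATE `article`
   SET `locked_by` = 1,
       `expires_at` = DATE_ADD(NOW(), INTERVAL 1 HOUR)
 WHERE `id` IN (/* ids returned by the SELECT */);

COMMIT;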

Happily, Zend_Db_Table_Select has a special forUpdate() method that implements the SELECT ... FOR UPDATE statement, and Zend_Db can cope with transactions as well. Let’s try it!

To lock a record, we need two fields:

  1. one to remember the ID of the agent that is processing the record (let’s call this column ‘locked_by‘)
  2. another to know the time when the lock becomes obsolete (let’s call this column ‘expires_at‘)
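
Adding them might look like this (a sketch, assuming the article table used later in this post; the index on expires_at anticipates the lock-releasing step described at the end):

ALTER TABLE `article`
  ADD COLUMN `locked_by`  INT UNSIGNED NULL DEFAULT NULL,
  ADD COLUMN `expires_at` DATETIME NULL DEFAULT NULL,
  ADD INDEX `idx_expires_at` (`expires_at`);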

I wrote a class that inherits from Zend_Db_Table and helps to fetch records while locking them.

<?php

class Koodix_Db_Table_Lockable extends Zend_Db_Table
{
    protected $_lockedByField = 'locked_by';
    protected $_expiresAtField = 'expires_at';
    protected $_TTL = '1 HOUR'; // time to live for a lock
    
    public function fetchLocked( Zend_Db_Table_Select $select, 
        $lockerID ) {
        
        $db = $this->getAdapter();
        $db->beginTransaction();
        
        // take rows that are free or already locked by this agent
        $column = $db->quoteIdentifier( $this->_lockedByField );
        $select->forUpdate()
             ->where("$column=? OR $column IS NULL", $lockerID);

        $data = $this->fetchAll($select);
        if( count($data) == 0 ) {
            $db->commit(); // nothing to lock, close the transaction
            return null;
        }
        
        $expiresAt = new Zend_Db_Expr('DATE_ADD( NOW(), 
            INTERVAL ' . $this->_TTL . ')');
        if( count($this->_primary) > 1 ) {
            // composite primary key: fall back to row-by-row saves
            foreach( $data as $item ) {
                $item->{$this->_lockedByField} = $lockerID;
                $item->{$this->_expiresAtField} = $expiresAt;
                    
                $item->save();
            }
        }
        else {
            // single-column primary key: lock all rows in one UPDATE
            $pk = current($this->_primary);
            $arrIds = array();
            foreach( $data as $item ) {
                $arrIds[] = $item->$pk;
            }
            
            $this->update(
                array(
                    $this->_lockedByField => $lockerID,
                    $this->_expiresAtField => $expiresAt,
                ), 
                $db->quoteIdentifier($pk) . 
                    ' IN ("'.implode('","', $arrIds).'")'
            );
        }
        
        $db->commit();
        return $data;
    }

    public function releaseLocks( ) {
        
        $db = $this->getAdapter();
        $column = $db->quoteIdentifier( $this->_expiresAtField );
        
        // free every row whose lock has expired
        return $this->update(
            array(
                $this->_lockedByField => null,
                $this->_expiresAtField => null,
            ), 
            "$column <= NOW()"
        );
    }
}

If the table has a composite primary key (containing more than one column), the ActiveRecord approach is used: the save() method is called for every record, which is simple (drawback — multiple UPDATE queries). Otherwise, if it is an ordinary table with a single ID column as the primary key, the IDs are collected into a list and all records are updated by a single statement with IN in the WHERE clause (which is much faster).

TTL (‘Time To Live‘) is the period of time for which a lock is valid. In my application the default is one hour. The TTL uses MySQL’s INTERVAL syntax, described in the MySQL documentation.

And now how to use it.

Let’s imagine you have several editors who divide the big article list between themselves and review the articles. My model class has a fetchForUser() method that returns no more than 5 articles for the current user (by the given user ID).

This is the Article table model, inherited from the class above. Usually such classes are located at application/default/models/ArticleTable.php:

<?php
class ArticleTable extends Koodix_Db_Table_Lockable
{
    protected $_name = 'article';
    
    public function fetchForUser( $userId, $count=5 ) {
        
        $select = $this->select()
            ->where('reviewed = 0')
            ->order('expires_at DESC')
            ->order('date_imported DESC')
            ->limit( $count );
        
        return $this->fetchLocked($select, $userId);
    }
}

Note: if the editor refreshes the page, the expires_at field is refreshed from the current time as well.

As for step #4 of our algorithm (releasing all obsolete locks) — create an action in your backend controller, call your table model’s releaseLocks() method in it, and call that action periodically from cron.
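
A minimal sketch of such an action (the controller name and routing are hypothetical):

<?php

class MaintenanceController extends Zend_Controller_Action
{
    // called periodically by cron, e.g. via wget or curl
    public function releaseLocksAction()
    {
        $table = new ArticleTable();
        $count = $table->releaseLocks(); // returns the number of freed rows
        $this->getResponse()->setBody("Released $count lock(s)");
    }
}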

To boost the performance of lock releasing, create an index on the expires_at column. (For this reason I rejected a ‘locked_since‘ column in favor of ‘expires_at‘.)

P.S. In my database date/time columns have DATETIME type. If you use INT to store timestamps, convert it to unix time and back.

32 tips to speed up your queries

As a certified MySQL developer (yes, I’m listed on mysql.com!!! 🙂 ), I would like to share some of the experience I gained while training for the certification. Today I will tell you how to speed up your queries.

  1. Use persistent connections to the database to avoid connection overhead.
  2. Check that all tables have PRIMARY KEYs on columns with high cardinality (many distinct values). A `gender` column has low cardinality (selectivity), while a unique user id column has high cardinality and is a good candidate for a primary key.
  3. All references between different tables should usually be done with indices (which also means the columns must have identical data types, so that joins based on them are faster). Also check that fields you often search by (those appearing frequently in WHERE, ORDER BY or GROUP BY clauses) have indices, but don’t add too many: the worst thing you can do is add an index on every column of a table 🙂 (I haven’t seen a table with more than 5 indices, even ones 20-30 columns big). If you never refer to a column in comparisons, there’s no need to index it.
  4. Using simpler permissions when you issue GRANT statements enables MySQL to reduce permission-checking overhead when clients execute statements.
  5. Use less RAM per row by declaring columns only as large as they need to be to hold the values stored in them.
  6. Use the leftmost index prefix — in MySQL you can define an index on several columns so that any left part of that index can be used as a separate index, which means you need fewer indices (see the sketch after this list).
  7. When your index consists of many columns, why not create a hash column which is short, reasonably unique, and indexed? Then your query will look like:
    SELECT *
    FROM table
    WHERE hash_column = MD5( CONCAT(col1, col2) )
    AND col1='aaa' AND col2='bbb';
  8. Consider running ANALYZE TABLE (or myisamchk --analyze from command line) on a table after it has been loaded with data to help MySQL better optimize queries.
  9. Use the CHAR type when possible (instead of VARCHAR, BLOB or TEXT) — when the values of a column have constant length: an MD5 hash (32 characters), an ICAO or IATA airport code (4 and 3 characters), a BIC bank code (8 or 11 characters), etc. Data in CHAR columns can be found faster than in variable-length columns.
  10. Don’t split a table if you just have too many columns. In accessing a row, the biggest performance hit is the disk seek needed to find the first byte of the row.
  11. A column must be declared as NOT NULL if it really never holds NULLs — this speeds up table traversal a bit.
  12. If you usually retrieve rows in the same order, like expr1, expr2, ..., run ALTER TABLE ... ORDER BY expr1, expr2, ... to optimize the table.
  13. Don’t use a PHP loop to fetch rows from the database one by one just because you can 😉 — use IN instead, e.g.:
    SELECT *
    FROM `table`
    WHERE `id` IN (1,7,13,42);
  14. Use column default values, and insert only those values that differ from the default. This reduces the query parsing time.
  15. Use INSERT DELAYED or INSERT LOW_PRIORITY (for MyISAM) to write to your change log table. Also, for MyISAM you can add the DELAY_KEY_WRITE=1 option — this makes index updates faster because they are not flushed to disk until the table is closed.
  16. Think of storing user session data (or any other non-critical data) in a MEMORY table — it’s very fast.
  17. For your web application, images and other binary assets should normally be stored as files. That is, store only a reference to the file rather than the file itself in the database.
  18. If you have to store big amounts of textual data, consider using a BLOB column to hold compressed data (MySQL’s COMPRESS() seems to be slow, so gzipping on the PHP side may help) and decompressing it on the application side. Anyway, it must be benchmarked.
  19. If you often need to calculate COUNT or SUM based on information from a lot of rows (articles rating, poll votes, user registrations count, etc.), it makes sense to create a separate table and update the counter in real time, which is much faster. If you need to collect statistics from huge log tables, take advantage of using a summary table instead of scanning the entire log table every time.
  20. Don’t use REPLACE (which is DELETE+INSERT and wastes ids): use INSERT ... ON DUPLICATE KEY UPDATE instead (i.e. INSERT + UPDATE if a conflict takes place). The same technique works where you would otherwise first run a SELECT to find out whether the data is already in the database and then run either INSERT or UPDATE. Why choose yourself — rely on the database side.
  21. Tune MySQL caching: allocate enough memory for the buffer (e.g. SET GLOBAL query_cache_size = 1000000) and define query_cache_min_res_unit depending on average query resultset size.
  22. Divide complex queries into several simpler ones — they have more chances to be cached, so will be quicker.
  23. Group several similar INSERTs into one long INSERT with multiple VALUES lists to insert several rows at a time: the query will be quicker due to the fact that connection + sending + parsing a query takes 5-7 times as long as the actual data insertion (depending on row size). If that is not possible, use START TRANSACTION and COMMIT if your database is InnoDB; otherwise use LOCK TABLES — this benefits performance because the index buffer is flushed to disk only once, after all INSERT statements have completed; in this case unlock your tables every 1000 rows or so to allow other threads access to the table.
  24. When loading a table from a text file, use LOAD DATA INFILE (or my tool for that), it’s 20-100 times faster.
  25. Log slow queries on your dev/beta environment and investigate them. This way you can catch queries whose execution time is high, those that don’t use indexes, and also slow administrative statements (like OPTIMIZE TABLE and ANALYZE TABLE).
  26. Tune your database server parameters: for example, increase buffers size.
  27. If you have lots of DELETEs in your application, or lots of updates that make dynamic-format rows longer (if you have a VARCHAR, BLOB or TEXT column, the row has dynamic format and may get split), schedule an OPTIMIZE TABLE query for every weekend via crond. This defragments the table, which means faster queries. If you don’t use replication, add the LOCAL keyword to make it faster.
  28. Don’t use ORDER BY RAND() to fetch several random rows. Fetch 10-20 entries (the last ones by time added or by ID) and use array_rand() on the PHP side. There are also other solutions.
  29. Consider avoiding the HAVING clause — it’s rather slow.
  30. In most cases, a DISTINCT clause can be considered as a special case of GROUP BY; so the optimizations applicable to GROUP BY queries can be also applied to queries with a DISTINCT clause. Also, if you use DISTINCT, try to use LIMIT (MySQL stops as soon as it finds row_count unique rows) and avoid ORDER BY (it requires a temporary table in many cases).
  31. When I read “Building Scalable Web Sites”, I found that it’s sometimes worth de-normalising some tables (Flickr does this), i.e. duplicating some data across several tables to avoid expensive JOINs. You can maintain data integrity with foreign keys or triggers.
  32. If you want to test a specific MySQL function or expression, use the BENCHMARK() function to do that.
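
To illustrate tip #6, here is a sketch of one composite index serving several query shapes (table and column names are made up):

ALTER TABLE `orders`
  ADD INDEX `idx_cust_status_date` (`customer_id`, `status`, `created_at`);

-- all of these can use the index via its leftmost prefixes:
SELECT * FROM `orders` WHERE `customer_id` = 42;
SELECT * FROM `orders` WHERE `customer_id` = 42 AND `status` = 'paid';

-- this one cannot, because it skips the leftmost column:
SELECT * FROM `orders` WHERE `status` = 'paid';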

Some of these hints are inapplicable if you use a framework, since direct queries are uninvited guests in that case: focus on competent database optimization — tune indices and server parameters.
