
Phoenix is a SQL layer over HBase delivered as a client-embedded JDBC driver targeting low latency queries over HBase data. Phoenix takes your SQL query, compiles it into a series of HBase scans, and orchestrates the running of those scans to produce regular JDBC result sets. The table metadata is stored in an HBase table and versioned, such that snapshot queries over prior versions will automatically use the correct schema. Direct use of the HBase API, along with coprocessors and custom filters, results in performance on the order of milliseconds for small queries, or seconds for tens of millions of rows.
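
For example, here is a minimal sketch of querying Phoenix through JDBC. The ZooKeeper host in the connection URL and the web_stat table and its columns are hypothetical, and the Phoenix client jar is assumed to be on the classpath:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class PhoenixQueryExample {
        public static void main(String[] args) throws Exception {
            // The Phoenix JDBC URL names the ZooKeeper quorum of the HBase cluster.
            Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
            PreparedStatement stmt = conn.prepareStatement(
                "SELECT host, core FROM web_stat WHERE host = ?");
            stmt.setString(1, "NA");
            ResultSet rs = stmt.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getString("host") + ": " + rs.getLong("core"));
            }
            conn.close();
        }
    }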

##Mission##

Become the standard means of accessing HBase data through a well-defined, industry standard API.

##SQL Support##

To see what's supported, go to our language reference. It includes all typical SQL query statement clauses, including SELECT, FROM, WHERE, GROUP BY, HAVING, ORDER BY, etc. It also supports a full set of DML commands as well as table creation and versioned incremental alterations through our DDL commands. We try to follow the SQL standards wherever possible.
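
As an illustrative sketch of how several of these clauses combine (reusing the connection from the example above; the web_stat table and its domain, host, and core columns are hypothetical):

    ResultSet rs = conn.createStatement().executeQuery(
        "SELECT domain, COUNT(*), SUM(core) FROM web_stat " +
        "WHERE host = 'NA' " +
        "GROUP BY domain " +
        "HAVING COUNT(*) > 10 " +
        "ORDER BY SUM(core) DESC");
    while (rs.next()) {
        // Aggregates are addressable by position when no alias is given.
        System.out.println(rs.getString(1) + ": " + rs.getLong(2) + " rows");
    }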

Here's a list of what is currently not supported:

  • Joins. Single table only currently.
  • Derived tables. Nested queries along with TopN queries are coming soon.
  • Relational operators. Union, Intersect, Minus.
  • Miscellaneous built-in functions. These are easy to add. Try it for yourself!

##Schema##

Phoenix supports table creation and versioned incremental alterations through DDL commands. The table metadata is stored in an HBase table.

A Phoenix table is created through the CREATE TABLE DDL command and can either be:

  1. built from scratch, in which case the HBase table and column families will be created automatically.
  2. mapped to an existing HBase table, by creating either a read-write TABLE or a read-only VIEW, with the caveat that the binary representation of the row key and key values must match that of the Phoenix data types (see the Data Types reference for details on the binary representation).
    • For a read-write TABLE, column families will be created automatically if they don't already exist. An empty key value will be added to the first column family of each existing row to minimize the size of the projection for queries.
    • For a read-only VIEW, all column families must already exist. The only change made to the HBase table will be the addition of the Phoenix coprocessors used for query processing. The primary use case for a VIEW is to transfer existing data into a Phoenix table, since data modifications are not allowed on a VIEW and query performance will likely be lower than with a TABLE. A minimal DDL sketch of both variants follows this list.
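
As a rough sketch of both variants (using the connection and imports from the earlier example, plus java.sql.Statement; all table, column family, and column names are made up, and the quoting rules for mapping to an existing table may differ):

    Statement stmt = conn.createStatement();

    // 1. Built from scratch: Phoenix creates the HBase table and the "usage"
    //    column family named by the usage.core and usage.db column prefixes.
    stmt.execute(
        "CREATE TABLE web_stat (" +
        "    host VARCHAR NOT NULL PRIMARY KEY, " +
        "    usage.core BIGINT, " +
        "    usage.db BIGINT)");

    // 2. Read-only view over an existing HBase table t1: the cf1 column family
    //    and its val qualifier must already exist, and their stored bytes must
    //    match the binary representation of the declared Phoenix types.
    stmt.execute(
        "CREATE VIEW t1 (" +
        "    pk VARCHAR NOT NULL PRIMARY KEY, " +
        "    cf1.val VARCHAR)");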

All schema is versioned, and prior versions are stored forever. Thus, snapshot queries over older data will pick up and use the correct schema for each row.

##Transactions##

The DML commands of Phoenix (UPSERT VALUES, UPSERT SELECT and DELETE) batch pending changes to HBase tables on the client side. The changes are sent to the server when the transaction is committed and discarded when the transaction is rolled back. Phoenix does not provide any additional transactional semantics beyond what HBase supports when a batch of mutations is submitted to the server. If auto commit is turned on for a connection, then Phoenix will, whenever possible, execute the entire DML command through a coprocessor on the server side, so performance will improve.
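
A minimal sketch of batching mutations on the client and sending them on commit (table and column names are again hypothetical):

    conn.setAutoCommit(false);
    PreparedStatement upsert = conn.prepareStatement(
        "UPSERT INTO web_stat(host, usage.core) VALUES(?, ?)");
    for (int i = 0; i < 100; i++) {
        upsert.setString(1, "host-" + i);
        upsert.setLong(2, i);
        upsert.execute();   // buffered on the client; nothing is sent to HBase yet
    }
    conn.commit();          // the batched mutations are sent to the server here
    // conn.rollback() would instead discard the pending, uncommitted changes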

Most commonly, an application will let HBase manage timestamps. However, under some circumstances, an application needs to control the timestamps itself. In this case, a long-valued "CurrentSCN" property may be specified at connection time to control timestamps for any DDL, DML, or query. This capability may be used to run snapshot queries against prior row values, since Phoenix uses the value of this connection property as the max timestamp of scans.
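
A sketch of opening such a snapshot connection (the timestamp value and table name are placeholders; java.util.Properties is also needed):

    Properties props = new Properties();
    // Query the data as it existed at this HBase timestamp (epoch milliseconds).
    props.setProperty("CurrentSCN", Long.toString(1359676800000L));
    Connection snapshotConn = DriverManager.getConnection("jdbc:phoenix:localhost", props);
    ResultSet rs = snapshotConn.createStatement().executeQuery("SELECT * FROM web_stat");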

##Metadata##

The catalog of tables, their columns, primary keys, and types may be retrieved via the java.sql metadata interfaces: DatabaseMetaData, ParameterMetaData, and ResultSetMetaData. For retrieving schemas, tables, and columns through the DatabaseMetaData interface, the schema pattern, table pattern, and column pattern are specified as in a LIKE expression (i.e. % and _ are wildcards escaped through the \ character). The table catalog argument to the metadata APIs deviates from a more standard relational database model, and instead is used to specify a column family name (in particular to see all columns in a given column family).
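
For example, assuming the hypothetical web_stat table and usage column family from the sketches above (and assuming unquoted names are stored in upper case, so the metadata patterns are matched against WEB_STAT):

    DatabaseMetaData md = conn.getMetaData();

    // All tables whose name starts with WEB_ (the \ escapes the _ wildcard).
    ResultSet tables = md.getTables(null, null, "WEB\\_%", null);

    // All columns of WEB_STAT in the "usage" column family; Phoenix uses the
    // catalog argument to name a column family rather than a database catalog.
    ResultSet cols = md.getColumns("usage", null, "WEB_STAT", "%");
    while (cols.next()) {
        System.out.println(cols.getString("COLUMN_NAME") + " " + cols.getString("TYPE_NAME"));
    }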

##Performance##

For a comparison of Phoenix versus Hive, Impala, and OpenTSDB, see our Performance page.

##Roadmap##

Our roadmap is driven by our user community. Other than adding miscellaneous built-in functions, some of the bigger ticket items under consideration include:

  1. Secondary Indexes. Allow users to create indexes through a new CREATE INDEX DDL command, and then, behind the scenes, build multiple projections of the table (i.e. a copy of the table using re-ordered or different row key columns). Phoenix will take care of maintaining the indexes when DML commands are issued and will choose the best table to use at query time.
  2. TopN Queries. Support a query that returns the top N rows, through support for derived tables and implementation of a server-side coprocessor that keeps the top N rows.
  3. IN Optimizations. When an IN (or the equivalent OR) appears in a query using the leading row key columns, compile it into a batched get to more efficiently retrieve the query results.
  4. COUNT DISTINCT. Although COUNT is currently supported, supporting COUNT DISTINCT will require returning more state to the client for the final merge operation.
  5. CREATE SEQUENCE. Surface the HBase put-and-increment functionality through the standard SQL sequence support.
  6. Dynamic Columns. For some use cases, it's difficult to model a schema up front. You may have columns that you'd like to specify only at query time. This is possible in HBase, in that every row (and column family) contains a map of values with keys that can be specified at run time. So, we'd like to support that.
  7. Nested Children. Unlike with standard relational databases, HBase allows you the flexibility of dynamically creating as many key values in a row as you'd like. Phoenix could leverage this by providing a way to model child rows inside a parent row. The child row would be composed of the set of key values whose column qualifier is prefixed with a known name and appended with the primary key of the child row. Phoenix could hide all this complexity and allow querying over the nested children through joining to the parent row.
  8. Joins. Support hash joins first, where one side of the join is small enough to fit into memory.
  9. Schema evolution. Phoenix supports adding and removing columns through the [ALTER TABLE](http://forcedotcom.github.com/phoenix/index.html#alter_table) DDL command, but changing the data type of, or renaming, an existing column is not yet supported.
  10. TABLESAMPLE. Implement a filter that uses a skip next hint based on the region boundaries of the table to only return n rows per region.
  11. OLAP extensions. Support the WINDOW, PARTITION OVER, RANK, etc. functionality.

We'd love to hear other ideas and have folks jump in and contribute. Join one of our Google groups and drop us a line. Or better yet, send us a Pull request.

##Building##

Phoenix is a fully mavenized project. That means you can build simply by doing:

    $ mvn package

which will build, test and package Phoenix and put the resulting jars (phoenix-1.0.jar and phoenix-1.0-client.jar) in the generated target/ directory.

To build, but skip running the tests, you can do:

    $ mvn package -DskipTests

To only build the parser, you can do:

    $ mvn process-sources

##Developing##

Use the m2e Eclipse plugin and do Import->Maven Project and just pick the root 'phoenix' directory.

##Contributing##

Join one or both of our Google groups:
