To run a query, type it in, then click the “RUN QUERY” button or just press Ctrl-Enter on your keyboard. While the query is running, the query text area is disabled and the elapsed query time clock counts up, right next to the “Query running” label.
BigQuery does not permit “SELECT *”-style queries; instead, you must specify every column name. And although you’ll be querying large datasets, you’ll want to keep your result sets small. To do that, use aggregating queries (with aggregate functions and GROUP BY) and/or a LIMIT n clause at the end of your query, as was done here (i.e. “LIMIT 200” appears at the end of the query).
Tables are identified using the syntax datasetname.tablename. If you reference any table from the samples dataset, you’ll need to add the “publicdata:” prefix before the “samples” dataset name.
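Putting those two points together, a query of this shape could also be run from the bq command-line tool; this is a sketch, assuming the tool is installed and authenticated (the shakespeare sample table shown is one of BigQuery’s public sample tables):

```shell
# An aggregating, grouped, LIMIT-ed query against a publicdata:samples table.
# Note the "publicdata:" prefix on the dataset name and the explicit column
# names in place of SELECT *.
bq query "SELECT corpus, COUNT(word) AS word_count
          FROM publicdata:samples.shakespeare
          GROUP BY corpus
          LIMIT 200"
```

Because of the GROUP BY and the LIMIT clause, the result set stays small no matter how large the underlying table is.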
BigQuery isn’t just for big desktop and laptop computers. It runs very well on the iPad, for example, as shown here, and it adapts to tablet use too: rotate your iPad from landscape to portrait and the page rendering adjusts, showing you several extra rows of data (illustrated nicely here).
You can create your own tables too, of course. Just hover over your dataset and click the “+” sign that appears to the right of its name to bring up the “Create Table” form, shown above. In the form, you need only supply an ID (name) for the table and its schema (expressed as a list of column names and data types), and point to the source file containing the data. Then click OK.
If the file is 10 MB or smaller, you can select it right from your own computer’s hard drive. If it’s bigger, you’ll need to push it up to Google Cloud Storage first, and then supply a link to the file, using “gs://” at the beginning of the URI, as seen here.
BigQuery assumes it will be importing from a CSV (comma separated values) text file with no initial row containing column names. If your file uses a non-comma delimiter, or its first row contains column names instead of data, you can tell BigQuery what delimiter to expect and to skip the first row (or first several rows).
This particular CSV file has historical baby name data for six states in the USA. The data was taken from the U.S. Social Security Administration’s namesbystate.zip data collection file, which contains data for all fifty states, each one in a separate file. The single file I built with just the six states’ data nonetheless has over 1 million rows.
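An import like the one just described can also be scripted with the bq command-line tool. This is a sketch, not the exact command used here: the bucket, dataset, table, and column names are all illustrative, and the flags assume a pipe-delimited file whose first row holds column names:

```shell
# Load a delimited file from Cloud Storage into a BigQuery table,
# overriding the default delimiter (comma) and skipping the header row.
# The schema is supplied inline as column:type pairs.
bq load --source_format=CSV \
        --field_delimiter='|' \
        --skip_leading_rows=1 \
        mydataset.names \
        gs://mybucket/namesbystate.csv \
        state:string,sex:string,year:integer,name:string,number:integer
```

The --field_delimiter and --skip_leading_rows flags correspond to the delimiter and skip-rows options in the web UI’s “Create Table” form.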