docs.daveops.net

Snippets for yer computer needs

SQL

To add a row

INSERT INTO table_name (column1, column2, column3,...)
VALUES (value1, value2, value3,...)

To update a row

UPDATE table_name
SET column1=value, column2=value2,...
WHERE some_column=some_value

To sort a table

SELECT column1, column2
FROM table_name
ORDER BY column2 [ASC|DESC]

Delete rows

DELETE FROM table_name
WHERE some_column=some_value

Search in fields

SELECT *
FROM Persons
WHERE City LIKE '%ville'

Conditional statements

CASE WHEN condition THEN result
[WHEN ...]
[ELSE result]
END
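
For example, a sketch against the Persons table from the search snippet above (the FirstName column is an assumption):

SELECT FirstName,
       CASE WHEN City LIKE '%ville' THEN 'ville'
            ELSE 'other'
       END AS city_type
FROM Persons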

Format a date

SELECT DATE_FORMAT(`date`,'%Y-%m-%d') AS showdate
FROM table_name

Retrieve records expiring within 90 days of the current timestamp

SELECT *
FROM stockserialitems
WHERE expirationdate < utc_timestamp() + INTERVAL 90 DAY

Check for duplicate rows

SELECT a, b, COUNT(*) AS cnt
FROM table_name
GROUP BY a, b
HAVING cnt > 1
ORDER BY cnt ASC;

SQLite

Show all tables

.tables

Show table schema

.schema tablename

See if a table exists

SELECT name
FROM sqlite_master
WHERE type='table' AND name='table_name'

Export tables

sqlite3 my.db .dump

Get table sizes

SELECT SUM("pgsize") FROM "dbstat" WHERE name='TABLENAME';
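A variant sketch that lists every table's size in bytes; note the dbstat virtual table is only available when SQLite is built with SQLITE_ENABLE_DBSTAT_VTAB:

SELECT name, SUM(pgsize) AS size_bytes
FROM dbstat
GROUP BY name
ORDER BY size_bytes DESC;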

PostgreSQL

Connect to a shell

psql

List databases

$ psql -l

# or, from inside the psql shell:
\l

Configuration

/var/lib/pgsql/data/postgresql.conf

View current queries

SELECT * FROM pg_stat_activity;

Killing long-running queries

Don't use SIGKILL (kill -9) on a backend process: the postmaster will shut down every other backend and replay the WAL on restart. Use the built-in functions instead:

pg_cancel_backend(pid int)    -- cancel the backend's current query
pg_terminate_backend(pid int) -- terminate the backend process
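
For example, a sketch that cancels everything active for more than five minutes (pid, state, and query_start are standard pg_stat_activity columns):

-- Cancel active queries older than five minutes, skipping our own session
SELECT pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE state = 'active'
  AND now() - query_start > interval '5 minutes'
  AND pid <> pg_backend_pid();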

Tuning

shared_buffers: about 1/4 of system memory
effective_cache_size: about 2/3 of available RAM
work_mem: depends on max_connections
maintenance_work_mem: used for VACUUM and other maintenance; 256 MB is reasonable
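
As a sketch, those settings for a machine with roughly 16 GB of RAM (the values are illustrative assumptions, not recommendations from this page); ALTER SYSTEM needs PostgreSQL 9.4 or later:

ALTER SYSTEM SET shared_buffers = '4GB';          -- ~1/4 of RAM; requires a restart
ALTER SYSTEM SET effective_cache_size = '10GB';   -- ~2/3 of RAM; planner hint only
ALTER SYSTEM SET work_mem = '16MB';               -- per sort/hash; scale against max_connections
ALTER SYSTEM SET maintenance_work_mem = '256MB';  -- used by VACUUM, CREATE INDEX, etc.
SELECT pg_reload_conf();                          -- picks up settings that don't need a restart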

Dealing with table bloat

Because of MVCC, deleting or updating a row in PostgreSQL leaves behind dead row versions (the old copies of the row). To reclaim that space and prevent table bloat, you'll need to vacuum the database.

From 8.1 on, there's an autovacuum daemon which can be tuned; have it vacuum frequently updated tables aggressively.

For really bad table bloat, use CLUSTER (not VACUUM FULL). It requires space for the in-use data while it runs, but is faster and prevents bloated indexes.
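
A sketch of the relevant commands (my_table and my_table_pkey are placeholder names):

-- Routine maintenance: reclaim dead tuples and refresh planner statistics
VACUUM ANALYZE my_table;

-- Severe bloat: rewrite the table in index order; takes an exclusive lock
-- and needs disk space for a second copy of the live data
CLUSTER my_table USING my_table_pkey;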

Dump database to tar file

# -O: no owner, -x: no privileges, -F t: tar format, -b: include large objects
pg_dump -O -x -F t -b -f [filename.tar] [db name]

Restore database from tar file

# -F t: tar format, -d: restore into this database
pg_restore -F t -d [db name] [filename.tar]

Create database

CREATE DATABASE db_name;

Reindexing

REINDEX { INDEX | TABLE | DATABASE | SYSTEM } name
