Print a Java stack trace from anywhere

Need to know which code calls a specific location? Dump the stack trace:

import org.apache.commons.lang3.exception.ExceptionUtils;

// ...somewhere in your method:
LOG.trace(ExceptionUtils.getStackTrace(new Throwable()));

You’re constructing a Throwable just to capture the current stack — you’re not throwing it. ExceptionUtils.getStackTrace turns the captured frames into a multi-line String that’s safe to hand to a logger. Make sure your log config has trace level enabled for whichever logger LOG belongs to, otherwise the line silently does nothing.


A few useful additions.

When this beats a breakpoint. You’d reach for this over a debugger when (a) you’re chasing a bug that only shows up in production, (b) the call happens in async / event-driven code where breakpoints are awkward, or (c) a method is called from many places and you want to know which path is firing. Sprinkle a few of these in, run the workload, then read the log.

No Apache Commons? Stdlib will do. ExceptionUtils lives in org.apache.commons.lang3, which is a separate dependency. If you don’t have it on the classpath you can fall back to plain JDK:

// Option 1 — quick and dirty, writes to stderr (not your logger):
new Throwable().printStackTrace();

// Option 2 — get the trace as a String you can log:
import java.io.StringWriter;
import java.io.PrintWriter;

StringWriter sw = new StringWriter();
new Throwable().printStackTrace(new PrintWriter(sw));
LOG.trace(sw.toString());

Java 9+: StackWalker is the modern API. If you actually want to inspect the frames programmatically (instead of just dumping a blob of text), use StackWalker — it’s lazy, so it doesn’t eagerly materialize every frame the way new Throwable().getStackTrace() does:

import java.lang.StackWalker;
import java.lang.StackWalker.StackFrame;
import java.util.stream.Collectors;

String trace = StackWalker.getInstance()
        .walk(s -> s.map(StackFrame::toString).collect(Collectors.joining("\n")));
LOG.trace(trace);

For a one-line debug print though, the original Apache Commons one-liner is still hard to beat. 🪵
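One more StackWalker trick, sketched here as an illustration (class and method names are made up for the demo): with the RETAIN_CLASS_REFERENCE option you can ask for the immediate caller's class directly, which is often the only thing you actually wanted from the full trace.

```java
import java.lang.StackWalker.Option;

public class CallerDemo {
    static String whoCalledMe() {
        // RETAIN_CLASS_REFERENCE is required, or getCallerClass() throws
        return StackWalker.getInstance(Option.RETAIN_CLASS_REFERENCE)
                .getCallerClass().getName();
    }

    public static void main(String[] args) {
        // main() is the caller of whoCalledMe(), so this prints CallerDemo
        System.out.println(whoCalledMe());
    }
}
```

Handy when a method is called from many places and you only need one frame, not the whole stack.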

Posted in java

Knowing your exception class name

You’re staring at a generic catch (Exception e) and you don’t know which actual exception is being thrown. The trick is to log the runtime class so you can replace the generic catch with a specific one:

} catch (Exception e) {
    LOG.error("SQL error when processing " + requestId
        + ". Exception class: " + e.getClass().getCanonicalName());
    throw e;
}

Run the failing scenario, read the log, and you’ll see something like java.sql.SQLIntegrityConstraintViolationException — now you can write a specific catch for it.


A few useful additions.

Three flavours of class name. The Class object exposes three different name accessors and they behave differently for inner / anonymous classes:

  • getName() — JVM internal name. Inner classes show up with a dollar sign: com.acme.Outer$Inner.
  • getCanonicalName() — Java source-style name. Inner classes use a dot: com.acme.Outer.Inner. Returns null for anonymous and local classes.
  • getSimpleName() — just the leaf name, no package: Inner. Empty string for anonymous classes.

For diagnostics, getName() is usually safest because it never returns null. getCanonicalName() reads more naturally in logs but bites you the moment an exception comes from an anonymous class.
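A quick demo of the three accessors side by side — an illustrative sketch using a nested class and an anonymous class (names are made up):

```java
public class NameDemo {
    static class Inner {}

    public static void main(String[] args) {
        System.out.println(Inner.class.getName());           // NameDemo$Inner
        System.out.println(Inner.class.getCanonicalName());  // NameDemo.Inner
        System.out.println(Inner.class.getSimpleName());     // Inner

        Runnable anon = new Runnable() { public void run() {} };
        System.out.println(anon.getClass().getName());           // NameDemo$1
        System.out.println(anon.getClass().getCanonicalName());  // null
        System.out.println(anon.getClass().getSimpleName());     // "" (empty)
    }
}
```

The anonymous-class row is exactly the case where building a log message with getCanonicalName() would print "null".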

If you’re using SLF4J, you don’t need to format the exception yourself. Pass the throwable as the last argument and SLF4J logs the class name and the full stack trace for free:

} catch (Exception e) {
    LOG.error("SQL error when processing {}", requestId, e);
    throw e;
}

That single line gives you more diagnostic information than the original — no manual getClass() call needed.

Once you know the class, prefer multi-catch over a generic catch (Exception). Java 7+ supports listing several exception types in one block:

} catch (SQLIntegrityConstraintViolationException | DataAccessException e) {
    LOG.error("SQL error when processing {}", requestId, e);
    throw e;
}

This narrows what you’re handling, makes the intent obvious to readers, and lets your linter actually help you. The original “log the class name” trick is a great discovery tool — but once you’ve discovered, replace it with a specific catch. 🪲

Posted in java

SELinux directory permission

To check the SELinux security context of a directory, use ls with -Z, for example:

ls -Z /var/www/html

If the context is wrong, you can relabel the directory recursively with chcon:

chcon -R -t httpd_sys_content_t /var/www/html
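One caveat worth knowing: chcon changes are temporary — a filesystem relabel resets them to whatever the policy says. To make the context survive a relabel, record it in the policy and reapply (a sketch; semanage is part of the policycoreutils tooling and may need installing separately):

```shell
# record the desired context for the path pattern in local policy
semanage fcontext -a -t httpd_sys_content_t "/var/www/html(/.*)?"
# reapply policy defaults to the tree (now including the rule above)
restorecon -Rv /var/www/html
```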
Posted in Linux, Operating System

RedHat / Centos Firewall

To add an exception to the firewall:

In RedHat/CentOS 6

iptables --line -vnL                                              # list current rules with line numbers
iptables -A INPUT -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT   # open port 80
iptables -A INPUT -p tcp -s 192.168.0.0/16 -j ACCEPT              # allow a whole source range
iptables -D INPUT -p tcp -s 192.168.0.0/16 -j ACCEPT              # delete that rule again
service iptables save                                             # persist rules across reboots

In RedHat/CentOS 7

firewall-cmd --list-all                                                # show current zone configuration
firewall-cmd --permanent --add-port=80/tcp                             # open port 80
firewall-cmd --permanent --zone=public --add-source=192.168.0.0/16     # trust a source range
firewall-cmd --permanent --zone=public --remove-source=192.168.0.0/16  # remove it again
firewall-cmd --reload                                                  # apply --permanent changes
systemctl disable firewalld                                            # or turn the firewall off entirely (use with care)
Posted in Linux, Operating System

Bash string comparison

#!/bin/bash

function test(){ # note: this shadows the 'test' builtin; the [ ... ] form below still works, but pick another name in real scripts
echo ""
echo "TEST $1"
echo "VAR_1: $VAR_1 VAR_2: $VAR_2 "
if [ "$VAR_1" = "false" ]; then echo " VAR_1 is false"; fi
if [ "$VAR_2" = "false" ]; then echo " VAR_2 is false"; fi
if [ "$VAR_1" = "false" ] || [ "$VAR_2" = "false" ]; then echo " At least one is false"; fi
}
VAR_1='true';
VAR_2='true';
test 1

VAR_1='true';
VAR_2='false';
test 2

VAR_1='false';
VAR_2='false';
test 3

function test2(){
echo ""
echo "TEST $1"
echo "VAR_3: $VAR_3 VAR_4: $VAR_4 "
[ -n "$VAR_3" ] && echo " VAR_3 is not null"
[ -z "$VAR_3" ] && echo " VAR_3 is null"
[ -n "$VAR_4" ] && echo " VAR_4 is not null"
[ -z "$VAR_4" ] && echo " VAR_4 is null"
}

VAR_3=""
VAR_4=""
test2 4

VAR_3="3"
VAR_4=""
test2 5
Result
$ /c/tmp/bashtest.sh

TEST 1
VAR_1: true VAR_2: true

TEST 2
VAR_1: true VAR_2: false
VAR_2 is false
At least one is false

TEST 3
VAR_1: false VAR_2: false
VAR_1 is false
VAR_2 is false
At least one is false

TEST 4
VAR_3: VAR_4:
VAR_3 is null
VAR_4 is null

TEST 5
VAR_3: 3 VAR_4:
VAR_3 is not null
VAR_4 is null
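One more comparison flavour worth knowing, not used in the script above: bash's [[ ]] supports glob and regex matching, which plain [ ] does not, and numeric comparison uses -eq/-lt rather than = or <. A small self-contained sketch:

```shell
#!/bin/bash
VALUE="hello-world"

# glob pattern match (right-hand side left unquoted so it acts as a pattern)
if [[ $VALUE == hello-* ]]; then echo " glob matched"; fi

# regex match with =~
if [[ $VALUE =~ ^hello-[a-z-]+$ ]]; then echo " regex matched"; fi

# numeric comparison: -lt, not <  (inside [ ], < would be a redirection!)
N=5
if [[ $N -lt 10 ]]; then echo " N is less than 10"; fi
```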
Posted in Bash, Linux

Show postgres lock

You’re staring at a query that won’t finish, or a deploy that hangs on a migration, or a UI request that just sits there. Postgres is almost certainly waiting on a lock that another transaction holds. The classic query for finding the culprit comes from the PostgreSQL wiki — it joins pg_locks against itself and against pg_stat_activity to show every blocked session and the session that’s blocking it:

SELECT blocked_locks.pid     AS blocked_pid,
       blocked_activity.usename  AS blocked_user,
       blocking_locks.pid    AS blocking_pid,
       blocking_activity.usename AS blocking_user,
       blocked_activity.query    AS blocked_statement,
       blocking_activity.query   AS current_statement_in_blocking_process
FROM pg_catalog.pg_locks         blocked_locks
JOIN pg_catalog.pg_stat_activity blocked_activity
  ON blocked_activity.pid = blocked_locks.pid
JOIN pg_catalog.pg_locks         blocking_locks
  ON blocking_locks.locktype       = blocked_locks.locktype
 AND blocking_locks.database       IS NOT DISTINCT FROM blocked_locks.database
 AND blocking_locks.relation       IS NOT DISTINCT FROM blocked_locks.relation
 AND blocking_locks.page           IS NOT DISTINCT FROM blocked_locks.page
 AND blocking_locks.tuple          IS NOT DISTINCT FROM blocked_locks.tuple
 AND blocking_locks.virtualxid     IS NOT DISTINCT FROM blocked_locks.virtualxid
 AND blocking_locks.transactionid  IS NOT DISTINCT FROM blocked_locks.transactionid
 AND blocking_locks.classid        IS NOT DISTINCT FROM blocked_locks.classid
 AND blocking_locks.objid          IS NOT DISTINCT FROM blocked_locks.objid
 AND blocking_locks.objsubid       IS NOT DISTINCT FROM blocked_locks.objsubid
 AND blocking_locks.pid           != blocked_locks.pid
JOIN pg_catalog.pg_stat_activity blocking_activity
  ON blocking_activity.pid = blocking_locks.pid
WHERE NOT blocked_locks.granted;

A few useful additions.

The modern shortcut: pg_blocking_pids. Since PostgreSQL 9.6 there’s a built-in function that does the same join in a single line — pg_blocking_pids(pid) returns an array of PIDs that are blocking the given PID. The whole “who’s blocking whom” query becomes:

SELECT a.pid                     AS blocked_pid,
       a.usename                 AS blocked_user,
       a.query                   AS blocked_statement,
       pg_blocking_pids(a.pid)   AS blocking_pids,
       a.wait_event_type,
       a.wait_event
FROM pg_stat_activity a
WHERE cardinality(pg_blocking_pids(a.pid)) > 0;

Shorter, easier to remember, and it handles fast-path locks correctly (the wiki query above can occasionally miss those). Use this on PG 9.6+ unless you specifically need the lock-type detail the wiki query produces.

How to read the result. blocked_pid is the session that’s stuck. blocking_pid is the one holding the lock. The catch: current_statement_in_blocking_process is whatever the blocker is running right now — often idle in transaction or some unrelated follow-up query, not the statement that originally took the lock. So if you see SELECT 1 in the blocker’s column, don’t be confused — the real cause is that this transaction has been holding locks for a while; the displayed query is just whatever it last ran. Look at state and xact_start in pg_stat_activity to see how long the transaction has been open.

Once you’ve found the blocker. Two functions, escalating in force:

-- Polite: ask the session to cancel its current query (transaction stays open)
SELECT pg_cancel_backend(12345);

-- Forceful: kill the entire backend connection (transaction rolls back)
SELECT pg_terminate_backend(12345);

pg_cancel_backend sends a SIGINT-like signal — the running statement is interrupted but the connection survives. pg_terminate_backend drops the whole connection. Try cancel first; reach for terminate when the session is wedged or holding locks while idle. Both require either superuser or membership in the pg_signal_backend role (PG 13+).

Don’t just kill — find out why. Killing the blocker stops the immediate pain, but if the same query keeps blocking everyone, the fix is upstream — a missing index, a transaction that stays open across a slow external call, an unbatched bulk update, or schema migrations during traffic. The query above shows you which session; the actual investigation usually lives in pg_stat_activity‘s state, xact_start, and application_name columns. 🔎
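A preventive habit, sketched here with a hypothetical orders table: lock_timeout is a standard session setting that makes a statement give up instead of queueing indefinitely behind a lock. It is especially useful around migrations, where one blocked DDL statement can stall every query behind it:

```sql
-- give up after 5 seconds of waiting for a lock instead of queueing forever
SET lock_timeout = '5s';

-- hypothetical migration: if the lock isn't granted in time, this errors out
-- (and can be retried off-peak) rather than stalling all traffic behind it
ALTER TABLE orders ADD COLUMN note text;
```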

Posted in Database, PostgreSQL

Check all table size in Postgres

You want a quick “which tables are eating my disk?” overview — usually before a vacuum, an archive purge, or a capacity-planning conversation. Here’s a one-shot query that joins information_schema.tables against pg_class to show every public table with its estimated row count and size:

SELECT ist.table_name,
       reltuples AS "entries",
       pg_size_pretty(relpages::BIGINT * 8 * 1024) AS SIZE
FROM information_schema.tables ist
INNER JOIN pg_class c ON (ist.table_name = c.relname)
WHERE ist.table_catalog = current_database()
  AND ist.table_schema  = 'public'
ORDER BY relpages DESC, ist.table_name ASC;

The columns: reltuples is the planner’s row-count estimate (updated by ANALYZE / autovacuum), and relpages * 8 KB is the table’s heap size — Postgres pages are 8 KB by default. Tables come out biggest-first. One caveat: the join is on relname alone, so a table with the same name in another schema can show up as a duplicate row.


A few useful additions.

relpages is heap-only — there’s a built-in for the full picture. The query above only counts the main table heap. It does not include indexes, TOAST tables (where wide values overflow), or the FSM/VM bookkeeping. For “how big is this table really, on disk?” use pg_total_relation_size:

SELECT n.nspname AS schema,
       c.relname AS TABLE,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size,
       pg_size_pretty(pg_relation_size(c.oid))       AS heap_size,
       pg_size_pretty(pg_indexes_size(c.oid))        AS indexes_size,
       c.reltuples::BIGINT                            AS estimated_rows
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_total_relation_size(c.oid) DESC;

Three sizes side by side:

  • pg_relation_size — the main table heap only (what the original query approximates)
  • pg_indexes_size — every index on the table, summed
  • pg_total_relation_size — heap + indexes + TOAST + everything else attached to the relation

The gap between heap_size and total_size is often surprisingly large — a write-heavy table with several indexes can carry 3–5x its heap size in indexes. That’s where you start looking for indexes you don’t actually use (pg_stat_user_indexes with idx_scan = 0).
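The unused-index check mentioned above, sketched out as a query (pg_stat_user_indexes is a standard statistics view; treat idx_scan = 0 as a lead to investigate rather than proof, since statistics resets and replica-only usage can make a used index look idle):

```sql
SELECT schemaname,
       relname,
       indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size,
       idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;
```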

reltuples is an estimate, not a count. The number can be stale, and on a brand-new table it may even be -1 until ANALYZE runs. If you need the true row count, use SELECT count(*) FROM mytable — but that’s an exact scan, so reach for it only when you really need exactness, preferably on smaller tables. For a quick directional answer, reltuples is fine.

Skip the SQL entirely with \dt+ in psql. Inside psql, the meta-command \dt+ shows every table in the current schema along with its size:

psql -d mydb -c "\dt+"

Quick, readable, no joins to write. \dt+ schema.* for a specific schema; \di+ for indexes. 📐

Posted in Database, PostgreSQL

Synchronize a postgres table through bash and csv

Please note that the CSV export below does not quote or escape commas: a field containing a comma will corrupt that row on import.

#!/bin/bash
DIR='/root/sql_dump'
mkdir -p "$DIR"
cd "$DIR"


SIZE=100000
#START=611244350
START=0
END=$((START + SIZE))
STOP=189097000
TABLNAME="schema.tablename"
while [[ $START -lt $STOP ]] && [[ $END -le $STOP ]]; do
    export PGPASSWORD="PASSWORD#1"
   
    echo "Read from SOURCE start $START end $END"
    FILE="${DIR}/dump${START}.csv"
    SQL="SELECT * FROM  ${TABLNAME} where id > $START and id <= $END"
    psql -U postgres -h SOURCEHOST -d DB1 -t -A -F"," -c "$SQL" > "$FILE"
   
    export PGPASSWORD="PASSWORD#1"
    SQL="DELETE FROM ${TABLNAME} where id > $START and id <= $END"
    psql -U postgres -h localhost -d DB2 -t -A -F"," -c "$SQL"
    psql -U postgres -h localhost -d DB2 -c "\copy ${TABLNAME} FROM '${FILE}' DELIMITER ',' CSV"
    echo "Injected to DESTINATION"
   
    START=$END
    END=$((START + SIZE))
    if [ $END -gt $STOP ]; then
        END=$STOP  
    fi
   
    rm "$FILE"

done
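If the intermediate file needs to be real CSV (commas and quotes handled correctly), let psql write it with \copy ... TO instead of redirecting query output — a sketch using the same placeholder host, database, and table names as the script above:

```shell
psql -U postgres -h SOURCEHOST -d DB1 \
  -c "\copy (SELECT * FROM schema.tablename WHERE id > 0 AND id <= 100000) TO '/root/sql_dump/dump0.csv' DELIMITER ',' CSV"
```

The matching \copy ... FROM ... CSV on the destination side (already used in the script) will then round-trip quoted fields correctly.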
Posted in Database, PostgreSQL

Export and Import Postgres query to CSV


# ===========================================================
# Export to CSV
# ===========================================================
# export PGPASSWORD="YOURPASSWORD"
# psql -U YOURUSERNAME -h YOURHOSTNAME -d YOURDBNAME -t -A -F"," -c "select * from YOURTABLENAME limit 2" > output.csv

# ===========================================================
# Import
# ===========================================================
# export PGPASSWORD="YOURPASSWORD"
# psql -U YOURUSERNAME -h YOURHOSTNAME -d YOURDBNAME -c "\copy YOURTABLENAME FROM '/tmp/dump.csv' DELIMITER ',' CSV"

Posted in Database, PostgreSQL

Assorted postgres queries

Get table sizes:


SELECT relname   AS objectname,
       relkind   AS objecttype,
       reltuples AS "#entries",
       pg_size_pretty(relpages::bigint*8*1024) AS size
FROM pg_class
WHERE relpages >= 8
ORDER BY relpages DESC;

Posted in Database, PostgreSQL