The Null Coalescing Operator: A Small PHP Feature That Quietly Changed Everything

If you’ve been writing PHP for a while, you probably remember the days of nested isset() checks cluttering up every template and controller. Since PHP 7, there’s a much cleaner way — and if you haven’t fully embraced it yet, it’s worth a second look.

The null coalescing operator (??) returns the left operand if it exists and isn’t null, otherwise the right. No warnings, no notices, no ceremony.

<?php
// The old way — verbose and easy to get wrong
$username = isset($_GET['user']) ? $_GET['user'] : 'guest';

// With null coalescing — same behavior, far less noise
$username = $_GET['user'] ?? 'guest';

// It chains too, which is where it really shines
$config = $userConfig['theme'] ?? $siteConfig['theme'] ?? 'default';

PHP 7.4 took it a step further with the null coalescing assignment operator (??=), which only assigns if the variable is currently null or unset:

<?php
$options = ['timeout' => 30];

// Only set 'retries' if it isn't already defined
$options['retries'] ??= 3;
$options['timeout'] ??= 60; // stays 30 — already set

print_r($options);
// Array ( [timeout] => 30 [retries] => 3 )

One subtle thing to keep in mind: ?? only reacts to null or unset — not to falsy values like 0, "", or false. That’s usually what you want, but it’s a meaningful difference from the older ?: (Elvis) operator, which falls back on any falsy value.

<?php
$count = 0;

echo $count ?? 10;  // prints 0 — because 0 is not null
echo $count ?: 10;  // prints 10 — because 0 is falsy

Small syntax, big quality-of-life improvement. If your codebase still has rows of isset() ternaries, refactoring them is one of those low-risk cleanups that pays off every time someone reads the file next. 🐘

Posted in php | Tagged | Leave a comment

Did You Know? Python’s Walrus Operator Can Make Your Code Cleaner

Did you know? Since Python 3.8, you can use the walrus operator ( := ) to assign a value to a variable as part of an expression. It’s a small piece of syntax that can meaningfully tidy up loops and comprehensions where you’d otherwise compute the same value twice.

Here’s a classic example — reading lines from a file until you hit an empty line:

# Without the walrus operator
with open("data.txt") as f:
    line = f.readline()
    while line:
        print(line.strip())
        line = f.readline()

# With the walrus operator — assign and test in one step
with open("data.txt") as f:
    while (line := f.readline()):
        print(line.strip())

It’s also handy in list comprehensions when you want to filter on a computed value without recomputing it:

numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

# Keep only squares greater than 20, without squaring twice
big_squares = [sq for n in numbers if (sq := n * n) > 20]

print(big_squares)
# [25, 36, 49, 64, 81, 100]

A word of caution: the walrus operator is powerful but easy to overuse. Reach for it when it genuinely removes duplication or makes intent clearer — not just because it’s clever. 🐍
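Another case where it earns its keep under that criterion is assigning and testing a regex match in one step, instead of calling re.match twice or splitting the assignment onto its own line. A small sketch (the log format here is invented for illustration):

```python
import re

log_line = "ERROR 404: not found"

# Bind the match object and branch on it in a single expression
if (m := re.match(r"ERROR (\d+)", log_line)):
    print(m.group(1))
# prints: 404
```

Without the walrus operator you would either match twice or hoist `m = re.match(...)` above the `if`; this keeps the binding right where it is used.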

Posted in Python | Tagged | Leave a comment

Did You Know? Python Dictionaries Preserve Insertion Order

Did you know? Since Python 3.7, the built-in dict type officially preserves the order in which keys are inserted. Before that, if you needed ordering guarantees you had to reach for collections.OrderedDict. Today, a plain dictionary is enough for most cases.

Here’s a small demonstration:

# Keys stay in the order they were added
user = {}
user["name"] = "Ada"
user["role"] = "Author"
user["joined"] = 2026

for key, value in user.items():
    print(f"{key}: {value}")

# Output:
# name: Ada
# role: Author
# joined: 2026

This also means dictionary comprehensions and merges keep a predictable order, which is surprisingly useful when serializing to JSON or building config objects:

defaults = {"host": "localhost", "port": 8080}
overrides = {"port": 9090, "debug": True}

# Merge with the | operator (Python 3.9+)
config = defaults | overrides
print(config)
# {'host': 'localhost', 'port': 9090, 'debug': True}
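Since the merged dict keeps insertion order, the standard json module serializes keys in that same order by default, so the output shape is predictable. A quick check:

```python
import json

defaults = {"host": "localhost", "port": 8080}
overrides = {"port": 9090, "debug": True}
config = defaults | overrides

# json.dumps walks the dict in insertion order
print(json.dumps(config))
# {"host": "localhost", "port": 9090, "debug": true}
```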

One caveat: ordering is a property of the dictionary, not of equality. Two dicts with the same keys and values are considered equal even if their insertion order differs. 🐍
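A quick demonstration of that caveat:

```python
a = {"x": 1, "y": 2}
b = {"y": 2, "x": 1}

print(a == b)    # True: equality compares keys and values, not order
print(list(a))   # ['x', 'y']
print(list(b))   # ['y', 'x']
```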

Posted in Python | Tagged | Leave a comment

BOLA in a Laravel Livewire app: when client-side state is the only thing standing between users and admin actions

A penetration test landed an interesting finding on a Livewire-powered admin panel I work on. The summary on the report read: Broken Object-Level Authorization (BOLA). A standard user can change a tenant-wide “who can access these assets” setting by replaying an administrator’s Livewire request. Severity: Low. Impact: High.

That gap between severity and impact is what made the finding interesting. “Low” because exploitation requires capturing a snapshot from an admin’s session — non-trivial. “High” because the moment you have one, a regular user becomes effectively an administrator. 🪓

What the tester actually did

Two browsers, side by side.

Browser A: logged in as a tenant administrator. Open the asset access settings page, flip the toggle, click Save. While the request is in flight, capture the Livewire snapshot — the JSON blob Livewire posts to /livewire/update containing the component class, the new value, and the cryptographically signed snapshot of component state. This is normal browser-DevTools work.

Browser B: log out of the admin session. Log in as a plain unprivileged user. Replay the captured request from Browser A, with Browser B’s session cookie. The server processes it. The toggle flips. The standard user has just changed a tenant-wide setting.

The Livewire snapshot’s signature checks out — the snapshot itself is valid. What it’s missing is any check that the user submitting the request is actually allowed to perform the action it represents.

Why this happens in Livewire specifically

If you’ve built REST controllers in Laravel, you’ve reflexively put authorization at the top of your action methods:

public function update(Request $request, Asset $asset)
{
    $this->authorize('update', $asset);
    // ...
}

Livewire components don’t trigger that reflex in the same way. The methods you write in a Livewire class — save(), delete(), toggleAccess() — feel like internal helpers. They’re public methods on a PHP object, not endpoints. But Livewire makes them exactly that: every public method is reachable from the browser via a signed snapshot replay. If you don’t authorize them server-side, nothing else does. Blade conditionals that hide UI elements only hide UI elements. The endpoint is open.

The mental shift: every public method on a Livewire component is a controller action, and deserves the same authorization treatment. 🛡️

The fix pattern

I went through every Livewire component in the project and applied the same three-step pattern.

1. Authorize in mount() for the whole component

If a component shouldn’t even be rendered for unauthorized users, fail fast in mount(). This handles the “don’t load it” half of the problem and short-circuits replay attacks against the form itself:

public function mount($context)
{
    $this->authorize('asset:list');
    $this->context = $context;
    // ...
}

2. Authorize on every action method

For each public method that mutates state — save, update, delete, toggleSomething — add an authorize call at the top. Don’t trust that mount() already gated the component, because a replay attack hits the action method directly without re-running mount():

public function deleteUser(): void
{
    $this->authorize('user:delete');
    // ... actual deletion
}

public function validateAndSaveUser(): void
{
    if ($this->context === 'createUser') {
        $this->authorize('user:create');
        // ...
    } else {
        $this->authorize('user:edit');
        // ...
    }
}

Note the pattern in the second example: the same component handles two different operations (create and edit) with different ability strings. The authorization check goes inside each branch, so the right ability is enforced for each.

3. Use a base class so it’s the default, not the exception

Across a few dozen components, it’s easy to miss one. We introduced a thin base class that all our Livewire components extend, which trait-includes a customized authorize():

namespace App\Auth;

use Illuminate\Foundation\Auth\Access\AuthorizesRequests as BaseAuthorizesRequests;

trait LivewireAuthorizesRequests
{
    use BaseAuthorizesRequests {
        authorize as baseAuthorize;
    }

    public function authorize($ability, $arguments = [])
    {
        return auth()?->user()?->canAccess($ability)
            || $this->baseAuthorize($ability, $arguments);
    }
}

Two small things going on. First, we alias Laravel’s stock authorize to baseAuthorize so we can fall through to it. Second, our app has a custom canAccess on the user that consults a role-ability map living in config/roles.php. The trait gives Livewire components both checks — our app’s role abilities, and stock Laravel policies — with one consistent call site.

4. The harder case: object-scoped authorization

Some abilities are global (“can this user create assets at all?”). Others are per-object (“can this user edit this specific campaign?”). The second one is closer to OWASP’s actual definition of BOLA — the object-level part. We added a sibling helper:

public function authorizeGroupedObject($ability, $groupedObject, $arguments = [])
{
    return (auth()?->user()?->canAccess($ability)
        && $groupedObject?->isAdminAuthorized(auth()->user()))
        || $this->baseAuthorize($ability, $arguments);
}

Used like:

$this->authorizeGroupedObject('campaign:edit', $this->campaign);

Both conditions must hold: the user has the role-level ability, and the user has access to the specific group/tenant/owner that this object belongs to. Without the second check, a user who has “campaign:edit” globally could replay a snapshot to edit a campaign in someone else’s group — exactly the BOLA pattern, just with the object identifier in the snapshot instead of the action.

Tests for replay attacks specifically

The most useful thing I added wasn’t the fix — it was a test file that simulates the exact attack. Roughly:

public function test_standard_user_cannot_replay_admin_asset_toggle()
{
    $admin = $this->createTenantAdmin();
    $standardUser = $this->createStandardUser();

    // Standard user calls the action directly, as a replay would.
    $this->actingAs($standardUser);
    Livewire::test(AssetsAccess::class, ['context' => 'settings'])
        ->call('save', true)
        ->assertForbidden();

    // And the underlying setting is still the original value.
    $this->assertFalse(Setting::firstWhere('key', 'can_access_assets')->value);
}

The point of this test isn’t that the UI hides the button from the user — that’s not what’s being verified. The point is the action method itself, when called by an unauthorized actor, refuses. That’s the only assertion that catches a replay attack.

I wrote one of these for every component I touched. Feature tests, not unit tests, and Livewire’s Livewire::test() harness makes them concise.

Lessons

  • A signed snapshot is not an authorization check. Livewire’s signature proves the snapshot wasn’t tampered with. It does not prove the current user is allowed to use it. These are different properties; the framework provides the first, you provide the second.
  • Every public method on a Livewire component is a public endpoint. Reason about it the same way you would a controller action. “This is only called from my own Blade view” is wrong — it’s called by anyone who can construct a request to /livewire/update.
  • Hiding UI is not enough. A @can directive in Blade hides a button. It does not protect the action behind the button. Both are needed; only the second one is security.
  • Bake authorization into the base class. If “add $this->authorize(…) to every public method” is a convention, you’ll forget. If authorize is a trait method on the base class and there’s a code review checklist, you’ll forget less. If you go a step further and write a static analyzer that flags Livewire action methods with no authorize() call, you’ll forget least.
  • Test replays directly. Don’t only test the happy path “admin can do thing” and the sad path “button doesn’t show for non-admin.” Also test “non-admin calls the action method directly and is rejected.” That’s the test that maps to the actual attack.

The pentest report rated this Low severity because the attacker needs an admin’s snapshot. In practice, the gap between “can capture an admin’s snapshot” and “is an admin” is whatever the local network conditions are — a shared workstation, a malicious browser extension, a screenshare gone wrong. Do not rely on that gap. Authorize on the server, on every action, every time. 🔐

Posted in Laravel, php | Tagged , , , , , , | Leave a comment

Laravel Sail: a developer’s cheat sheet 🐳

Laravel ships with Sail — a thin command-line wrapper around docker compose that gives you the whole Laravel toolchain (PHP, MySQL, Redis, Mailpit, Node) in containers, without you needing to install any of them on your host. The only thing you need on the laptop is Docker. Everything else lives in containers and goes away when you delete the project.

This is the quick-reference I keep open in another tab while building Laravel apps on macOS. 🍎

What you actually need on the host

  • macOS (these notes target Apple Silicon and Intel Macs equally)
  • Docker Desktop — the only hard prerequisite. Sail uses it for everything else (PHP, Composer, Node, MySQL, Redis).
  • That’s it. You don’t need PHP installed locally. You don’t need Composer locally. You don’t need Node locally. You install them once via Sail’s bootstrap and from then on every command runs inside containers.

Spin up a fresh project (with MySQL and Redis)

The official one-liner uses Laravel’s builder image to scaffold a new app and pre-select the services you want. Tell it mysql and redis in the with query parameter:

curl -s "https://laravel.build/example-app?with=mysql,redis" | bash
cd example-app
./vendor/bin/sail up -d

That brings up four containers — your app, MySQL, Redis, and Mailpit (the dev mail-catcher) — and exposes the app on http://localhost. The first run pulls images and takes a couple of minutes; subsequent sail up calls are fast.

Tip: alias sail so you don’t have to type the long path every time.

alias sail='[ -f sail ] && sh sail || sh vendor/bin/sail'

Drop that into your ~/.zshrc and you can just type sail up -d, sail artisan …, etc., from anywhere inside a Sail project.

The Artisan commands you’ll reach for daily

Anything you’d run as php artisan … on a non-Sail setup, you run as sail artisan …. Sail just shells into the app container and forwards the command. The most common ones:

sail artisan tinker                      # interactive REPL with your app booted
sail artisan route:list                  # show every registered route
sail artisan migrate                     # run pending migrations
sail artisan make:controller UserController
sail artisan make:model Department -m    # model + migration in one shot
sail artisan queue:work                  # start a worker against the default queue

tinker is the one you’ll reach for most — a Laravel-aware PHP REPL with every facade, every model, and your full config() ready to go. Need to check what User::find(1)->roles returns? sail artisan tinker, type the expression, get an answer. Beats writing a controller and route just to peek at data.

Mailpit — see every email your app sends

Sail bundles Mailpit, a friendly local SMTP server with a web UI. Any mail your app tries to send (password resets, notifications, queued emails) gets caught and shown at:

http://localhost:8025

No SMTP credentials, no real provider, no actual emails leaving your machine. Just open the inbox and see what your app sent. The .env Sail generates already wires MAIL_MAILER=smtp, MAIL_HOST=mailpit, MAIL_PORT=1025, so it works on first run.

Database workflow: migrate, seed, refresh

The mental model: migrations describe schema changes, seeders insert sample data, and there’s a small family of commands for moving between states while you’re iterating on a feature.

# Wipe the database, re-run every migration from scratch, then run seeders
sail artisan migrate:refresh --seed

# Create a new migration file in database/migrations/
sail artisan make:migration create_departments_table

# Roll back the last batch (or the last N batches) and re-apply forward —
# the fastest way to iterate on a brand-new migration you're still tweaking
sail artisan migrate:rollback --step=1 && sail artisan migrate

The third one is the workhorse for daily development: edit the migration, roll it back one step, run forward, repeat. migrate:refresh --seed is heavier — it nukes everything and re-applies, so save it for when you’ve made many changes and want a clean slate.

Installing dependencies

Composer (PHP) and npm (frontend) both run inside the Sail container. The full “I just pulled a fresh branch” sequence:

sail composer install && sail npm install && sail npm run dev

sail npm run dev starts Vite in dev mode for hot reloading. For a production-style build, use sail npm run build and serve the compiled assets.

Routes and pages

The flow for a new page is short. Define a route, point it at a controller method, render a Blade view.

// routes/web.php
use App\Http\Controllers\DashboardController;

Route::get('/dashboard', [DashboardController::class, 'index'])
    ->name('dashboard');
sail artisan make:controller DashboardController
// app/Http/Controllers/DashboardController.php
public function index()
{
    return view('dashboard', ['user' => auth()->user()]);
}

Then check what’s wired by listing every registered route:

sail artisan route:list

Add --except-vendor to hide the Laravel default routes and see only yours; --name=dashboard filters to a single route by name.

Getting a shell inside a container

Sometimes you need to poke around inside a container — inspect a config file, run a one-off mysql command, check redis state. Sail has shortcuts:

sail shell        # bash inside the app container (root — be careful)
sail mysql        # mysql client connected to the dev database
sail redis        # redis-cli connected to the local redis

Under the hood these are just docker exec calls. The equivalents:

docker exec -it example-app-laravel.test-1 bash    # what 'sail shell' does
docker exec -it example-app-mysql-1 bash           # a shell inside the MySQL container
docker exec -it example-app-redis-1 sh             # a shell inside the Redis container

The container names are <project-name>-<service-name>-1, so substitute your project’s directory name for example-app. sail shell drops you in as root in the app container — that’s deliberate (Sail’s container is a development sandbox), but it does mean you can break things by being careless. Treat it like an SSH session into a dev box.

Tests

Laravel uses PHPUnit under the hood (with Pest as a popular alternative). Sail makes the runner one command:

# Generate a unit test stub
sail artisan make:test UserTest --unit

# Run the whole suite
sail artisan test

# Run with HTML coverage (output goes to ./coverage)
sail artisan test --coverage-html coverage

--unit creates the test under tests/Unit/ (no Laravel app boot, fastest to run). Without it, you get a feature test under tests/Feature/ which boots the application and gives you the full HTTP-style helpers ($this->get('/dashboard')->assertOk()). Use Unit for pure logic, Feature for anything touching routes, models, or services.

The --coverage-html flag requires Xdebug or PCOV in the container. Sail’s image ships PCOV, so this works out of the box on a default Sail setup.

When things misbehave: the cleanup checklist

Laravel caches a lot — config, routes, views, compiled service container. After bigger changes (especially editing config/*.php or env vars), the caches can lie to you. The reset:

sail artisan cache:clear
sail artisan config:clear
sail artisan route:clear
sail artisan view:clear

And of course, the first place to look when something is broken is the application log. Tail it in a separate terminal while you reproduce the bug:

tail -f storage/logs/laravel.log

Stack traces, query logs, anything you’ve Log::info()’d — it all ends up here. If your app is logging to a different channel (configured in config/logging.php), check there instead.

The day-to-day shape

Once you’ve used Sail for a project or two, the daily loop becomes muscle memory: sail up -d in the morning, sail artisan commands as you build, sail artisan test before pushing, sail down when you switch projects. Nothing leaks onto the host, every project’s PHP/MySQL/Redis versions stay independent, and onboarding a new teammate is “install Docker, clone the repo, ./vendor/bin/sail up”.

For most Laravel work I do these days, I never type php directly anymore. ⛵

Posted in Web Development | Tagged , , , | Leave a comment

List open or listening ports

You started a service, you can’t tell whether it actually bound to its port, and you want to see what’s listening — or you want to find out which process is squatting on port 8080. Two one-liners, two operating systems:

macOS

lsof -nP -i4TCP

RedHat / CentOS 7

netstat -tulpn

What the flags do: lsof -nP turns off DNS and port-name resolution (so you see 192.168.1.5:443 instead of app-server.local:https — faster and unambiguous). -i4TCP filters to IPv4 TCP sockets. For netstat -tulpn: t = TCP, u = UDP, l = listening only, p = show the PID/process, n = numeric (no DNS).


A few useful additions.

On modern Linux, prefer ss over netstat. The net-tools package that ships netstat is largely deprecated — most distros have moved to iproute2’s ss (socket statistics). It’s faster on busy machines (reads from netlink instead of /proc) and uses the same flags you already know:

ss -tulpn

If you’ve been muscle-memory-typing netstat for years, the migration is one character. Same flags, same shape, modern implementation.

Listening-only on macOS. lsof -i4TCP shows every TCP connection — listeners and established. To narrow to just the things accepting new connections, add -sTCP:LISTEN:

# All listening TCP sockets (IPv4 + IPv6)
lsof -nP -iTCP -sTCP:LISTEN

# Add UDP for the full picture
lsof -nP -iUDP

The question you actually want answered: “what’s on port 8080?” Three flavours of the same question:

# macOS / Linux
lsof -i :8080

# Linux (modern)
ss -tulpn | grep :8080

# Linux (also handy — kill-by-port)
sudo fuser -k 8080/tcp

The last one is the nuclear option: fuser -k kills whoever has the port. Useful when a stale process is holding it and you don’t care about graceful shutdown.

Run it as root if you want to see other users’ processes. Without sudo, lsof, netstat -p, and ss -p only show process names for processes you own. If you see a port listed as LISTEN but the PID column is blank, that’s the symptom — re-run with sudo and the owner pops out.

Windows. The closest equivalent on Windows is netstat -ano from cmd (the -o shows the PID; cross-reference in Task Manager or with tasklist /fi "PID eq 1234"). PowerShell users get something nicer — Get-NetTCPConnection returns proper objects you can pipe and filter:

Get-NetTCPConnection -State Listen | Select-Object LocalAddress, LocalPort, OwningProcess

Pair that with Get-Process -Id $pid to translate OwningProcess back to a process name. 🔌

Posted in Bash, Operating System | Leave a comment

MongoDB Notes

If you’re storing binary files inside MongoDB, the convention is called GridFS. It splits each logical file into two collections: a metadata document and a sequence of binary chunks. This post is a cheat sheet for inspecting and tweaking those documents from the Mongo shell. 🍃

When using MongoDB to store files, we have two collections:

  1. The place where MongoDB stores the file metadata: store.files
  2. And the place where MongoDB stores the file content: store.chunks

Depending on the size of the file, one entry in store.files can point to many entries in store.chunks. The bigger the file, the more entries you’ll encounter.

// Show all / list all entries from store.files
db.getCollection('store.files').find({});

// Show only a particular entry from store.files
db.getCollection('store.files').find({ _id: ObjectId("5b02d232cbce1d07e08401c7") });

// The same can be used for store.chunks.
db.getCollection('store.chunks').find({});

The metadata fields in store.files can be augmented at query time (the new field exists only in the result, not in the database):

db.getCollection('store.files').aggregate([
    { $match: { _id: ObjectId("5b02d232cbce1d07e08401c7") } },
    { $addFields: { 'key_reference': '1234' } }
]);

Or we can do an update on store.files, which actually persists the new field into the database:

db.getCollection('store.files').updateMany(
    { _id: ObjectId("5b02d232cbce1d07e08401c7") },
    { $set: { 'key_reference': '1234' } }
);

A few useful additions.

Why files are split into chunks. MongoDB’s per-document hard limit is 16 MB. GridFS works around that by splitting any file larger than the chunk size into many small chunk documents and writing one metadata doc that links them together. The default chunk size is 255 KB, configurable per bucket. So a 10 MB upload becomes one *.files doc and roughly 40 *.chunks docs, all sharing the same files_id. To inspect that relationship for a specific file:

db.getCollection('store.chunks')
    .find({ files_id: ObjectId("5b02d232cbce1d07e08401c7") })
    .sort({ n: 1 });   // n is the chunk index, 0..N-1
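The chunk-count arithmetic is easy to sanity-check. A sketch assuming binary megabytes and the 255 KB default; real counts differ if the bucket was created with a custom chunk size:

```python
import math

file_size = 10 * 1024 * 1024   # a 10 MB upload, in binary megabytes
chunk_size = 255 * 1024        # GridFS default chunk size

# One chunk document per chunk_size slice; the last one is partial
print(math.ceil(file_size / chunk_size))
# 41
```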

The bucket name store.* is custom. The default GridFS bucket is named fs, so out of the box you’d see fs.files and fs.chunks. The bucket name is whatever the application set when it opened the GridFS handle. If your app uses store, replace fs with store in any docs example you find online.

Putting and getting files in the first place. The shell snippets above are for inspecting files that are already there — they don’t help you upload or download the binary content. For that, use the mongofiles CLI or the driver-level GridFS API:

# Upload
mongofiles --uri "mongodb://localhost/mydb" --prefix store put /path/to/file.pdf

# Download
mongofiles --uri "mongodb://localhost/mydb" --prefix store get file.pdf

# List
mongofiles --uri "mongodb://localhost/mydb" --prefix store list

From application code, every official driver has a GridFS class — GridFSBucket in Node and Java, GridFS in PyMongo, IGridFSBucket in C#. They handle the chunking and reassembly for you.

Don’t delete files by hand. A common pitfall: deleting a row from store.files directly leaves the matching chunks orphaned in store.chunks, slowly bloating the collection. Either use mongofiles delete <filename>, or your driver’s GridFSBucket.delete(fileId), both of which remove the metadata and the chunks atomically.

Should you actually use GridFS? A practical heads-up: if your files are bigger than 16 MB and you already use MongoDB, GridFS is a reasonable fit and keeps backups simple. But for most modern stacks, putting the bytes in object storage (S3, GCS, MinIO, R2) and keeping only a URL or key in MongoDB is cheaper, faster, and easier to scale. GridFS is most defensible when you genuinely want files transactionally co-located with the database — e.g. mobile/embedded scenarios, or when network egress to S3 is a non-starter. 💡

Posted in Database | Leave a comment

CentOS 6 Repo Settings

To fix repo settings in CentOS 6:

1. Make sure there are no proxy or other unusual settings in /etc/yum.conf:
vi /etc/yum.conf

2. There are a couple of files within /etc/yum.repos.d/. Make sure the URLs are correct (accessible) and that enabled=1:
ll /etc/yum.repos.d/

3. Clean up the repo metadata, list the repos, and retest:
yum --enablerepo=base clean metadata
yum repolist all
yum search java-1.8.0-openjdk

Posted in Linux | Leave a comment

Show Linux Partition Tree Mountpoint and If SSD

lsblk -o TYPE,NAME,KNAME,UUID,MOUNTPOINT,SIZE,ROTA
Posted in Linux | Leave a comment

Setting log4j log level programmatically

Sometimes you don’t want to ship a log4j.properties file — you want to spin up logging in code. Useful inside unit tests, one-off debug runs, or anywhere you want to flip log levels at runtime. Here’s a self-contained setupLog4j() that wipes any existing config, installs a console appender with a pattern, sets the root level to DEBUG, and binds a logger for your class.

import org.apache.log4j.BasicConfigurator;
import org.apache.log4j.ConsoleAppender;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

private static void setupLog4j() {
        System.out.println("setupLog4j");
        BasicConfigurator.resetConfiguration();
        // Start clean.
        Logger.getRootLogger().removeAllAppenders();
        // Create appender
        ConsoleAppender console = new ConsoleAppender();
        // Configure the appender
        String PATTERN = "%d --[ %p ] %l: %m%n";
        console.setLayout(new PatternLayout(PATTERN));
        console.activateOptions();
        console.setName("stdout");
        Logger.getRootLogger().setLevel(Level.DEBUG);
        BasicConfigurator.configure(console);
        LOG = Logger.getLogger(MinerTest.class);
}

Replace MinerTest.class with your own class — it’s just the logger name (Log4j conventionally uses the fully-qualified class name so output stays organised by package).


A few useful additions.

This is Log4j 1.x — and Log4j 1.x is end-of-life. The package above is org.apache.log4j. Log4j 1.x reached EOL in August 2015 and has unpatched CVEs against it. If you’re starting something new, use Log4j 2 (org.apache.logging.log4j) or SLF4J with Logback. Keep this snippet around as a recipe for legacy projects, but don’t pick Log4j 1.x for anything fresh.

Set the level for one package, not the whole app. Logger.getRootLogger().setLevel(Level.DEBUG) turns DEBUG on globally — that floods everything, including third-party libraries. Usually you only want DEBUG for your own code:

Logger.getLogger("com.acme.miner").setLevel(Level.DEBUG);
Logger.getLogger("org.springframework").setLevel(Level.WARN); // tame the framework

Same idea in Log4j 2. The API is different — there’s no BasicConfigurator; instead you talk to the LoggerContext / Configurator:

import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.core.config.Configurator;

// Set the level for one package at runtime:
Configurator.setLevel("com.acme.miner", Level.DEBUG);

// Or for the root logger:
Configurator.setRootLevel(Level.DEBUG);

For the full “build a config from scratch” equivalent, see Log4j 2’s ConfigurationBuilder — the API is more verbose but lets you compose appenders, layouts, and loggers programmatically.

Using SLF4J / Logback? If your codebase logs via org.slf4j.Logger with Logback under the hood (very common), you flip levels through Logback’s own classes — SLF4J itself has no level-setting API:

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import org.slf4j.LoggerFactory;

Logger logger = (Logger) LoggerFactory.getLogger("com.acme.miner");
logger.setLevel(Level.DEBUG);

The cast from org.slf4j.Logger to ch.qos.logback.classic.Logger is the giveaway — SLF4J is just a facade; the level lives on the implementation. 🪵

Posted in java | Leave a comment