For a long time, the best option was a library called Hashids, which also has a Laravel bridge package created by Vincent Klaiber. Recently, however, the Hashids project announced the launch of a new tool called [Sqids](https://sqids.org), the spiritual successor to Hashids. This is more than just a rebranding: the underlying algorithm has been simplified and is now identical across all platforms. A PHP implementation was recently released, created by Ivan Akimov and Vincent Klaiber.
Let's take a look at how we can integrate Sqids with a Laravel application. Our goal will be to allow this integration to be completely transparent with our usage of the framework.
NB: Sqids can be decoded by third parties, so they are not suitable for truly sensitive information. If that is a concern for your application, you should consider an alternative strategy.
First we will pull in the sqids/sqids-php package with Composer:
composer require sqids/sqids
NB: sqids/sqids-php requires at least PHP 8.1 and either ext-bcmath or ext-gmp; it has no other dependencies.
Now we will set up a utility class to handle encoding and decoding sqids:
<?php
namespace App\Utility;
use Sqids\Sqids;
final class Sqid
{
/**
* Encode an integer as a sqid.
*/
public static function encode(?int $number): string
{
if (is_null($number)) {
return '';
}
return resolve(Sqids::class)->encode([$number]);
}
/**
* Decode a sqid string into an integer.
*/
public static function decode(string $sqid): int
{
return resolve(Sqids::class)->decode($sqid)[0];
}
}
Sqids is built to handle arrays; it accepts an array of integers for encoding and returns an array of integers when decoding. In our case we are only interested in single integer values, not arrays, so the utility methods above handle packaging and unpacking the arrays for us. The resolve() method is a Laravel tool for fetching classes from the service container.
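Assuming the Sqids singleton is bound in the container (which we will set up shortly), usage of this utility looks like the sketch below; the encoded string shown in the comment is illustrative, not the exact output:

```php
use App\Utility\Sqid;

$sqid = Sqid::encode(42);     // an opaque string, e.g. "Uk" (illustrative)
$id   = Sqid::decode($sqid);  // 42 - a round trip returns the original integer

Sqid::encode(null);           // '' - null IDs encode to an empty string
```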
One nice benefit of Sqids over Hashids is that there is no need for a salt value to randomize the output. It works out of the box without any additional configuration, which may suit your needs just fine. In my case, however, I would like to ensure that all of the generated sqid strings have the same minimum length for the lifetime of my application. This can be achieved by passing in a minLength value to the constructor:
(new Sqids(minLength: 6))->encode([1]);
// 'UkLWZg'
NB: This is a minimum length, not a specific length.
Additionally, the default "alphabet" of characters used for generating sqid strings contains uppercase and lowercase letters together. I would prefer to use only lowercase letters so I don't have to be concerned about handling case-sensitive URLs.
To achieve these goals we will prepare an instance of the Sqids class that is configured to our liking and bind it to the service container as a singleton so we can resolve it wherever we need it.
Let's create a new service provider to handle our Sqids configuration:
php artisan make:provider SqidsServiceProvider
This will create a new app/Providers/SqidsServiceProvider class. We will use the register method to handle our configuration:
<?php
namespace App\Providers;
use Illuminate\Support\ServiceProvider;
use Sqids\Sqids;
class SqidsServiceProvider extends ServiceProvider
{
const PAD = 7;
const ALPHABET = 'msd793zjyw5rf8v6qxahpgn1bk0etc4u2';
/**
* Register services.
*/
public function register(): void
{
$this->app->singleton(Sqids::class, function () {
return new Sqids(self::ALPHABET, self::PAD);
});
}
}
In addition to shortening the alphabet I have also shuffled the order of the letters. This is the recommended method for generating sqid values that are unique to your application. I have also removed the letters 'l', 'i' and 'o' because they look similar to '1' and '0', though you can craft any alphabet you would like.
Make sure you register your provider in your config/app.php file. With this singleton bound to the service container we are guaranteed to get our configured version of the Sqids class any time we resolve it through the container:
$sqid = resolve(Sqids::class)->encode([$number]);
Rather than having to encode a model ID every time we need to reference a sqid, we can use a model accessor to compute sqids automatically. We will set this up as a trait that can be applied to any model that we want to reference with sqids:
<?php
namespace App\Models;
use App\Utility\Sqid;
use Illuminate\Database\Eloquent\Casts\Attribute;
trait HasSqid
{
/**
* Get the obfuscated version of the model ID.
*
* @see https://sqids.org
*/
protected function sqid(): Attribute
{
return Attribute::make(
get: fn () => Sqid::encode($this->id)
);
}
}
NB: I have put this trait in the App\Models namespace but you can put it anywhere you want; just make sure to adjust the namespace declaration accordingly.
Now when we call $model->sqid we will get the appropriate sqid version of the model ID, based on our configuration specs from earlier.
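For example, applied to a hypothetical Order model (the encoded value in the comment is illustrative):

```php
use App\Models\HasSqid;
use Illuminate\Database\Eloquent\Model;

class Order extends Model
{
    use HasSqid;
}

// $order->id   === 1
// $order->sqid  => the encoded form of 1, e.g. "UkLWZg" (illustrative)
```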
The last piece of the puzzle is route model binding. Can we get Laravel to translate sqids into models for us automatically when handling routes? Let's give it a shot.
We will add two additional methods to our HasSqid trait:
/**
* Get the route key for the model.
*
* @return string
*/
public function getRouteKeyName()
{
return 'sqid';
}
This method tells Laravel to use the 'sqid' accessor when generating model routes instead of 'id':
route('some.named.route', $sqidModel);
// https://myapp.com/some/named/route/abc1234
Next, we will intercept the route model binding handler to decode the sqid before querying the database:
/**
* Retrieve the model for a bound value.
*
* @param mixed $value
* @param string|null $field
* @return \Illuminate\Database\Eloquent\Model|null
*/
public function resolveRouteBinding($value, $field = null)
{
return $this->resolveRouteBindingQuery($this, Sqid::decode($value), 'id')->first();
}
The "value" passed into the resolveRouteBinding method is the sqid string itself. The "field" is not necessary for our purposes, but it relates to optional key customization. In our version of the resolveRouteBinding method we are telling Eloquent to find the first database record with an ID that matches our decoded sqid value.
With those two methods in place we can now add this trait to any model we want, enabling the use of sqids for that model while Laravel does all of the heavy lifting for us. Perfect!
Having been a big fan of Hashids for several years now it was delightful to see this new simplified and upgraded version released recently. Ivan and Vincent have done an excellent job on this PHP implementation.
The key to forcing browsers to update their asset cache is to change the name of the file that is being referenced. Laravel Mix does this by implementing asset fingerprinting: a unique file name is generated for each asset each time you run your mix scripts (in production), and the mix() helper method looks up the appropriate name via a mix-manifest.json file that is stored in your public path.
We can accomplish something similar by appending a query string parameter to our asset URLs. To do that we will set up a config value to keep track of our asset version and then use that value as a query parameter when calling our asset files.
To implement this we will first create an assets.php config file in our config/ directory. This config file will have one value, called "version":
<?php
return [
'version' => env('ASSETS_VERSION', null),
];
Notice that we are referencing a new ASSETS_VERSION environment variable. Add this key to your .env file. We will set up an artisan command to update the value of the ASSETS_VERSION variable for us when needed. We can crib the functionality for this tool from the key:generate command, which performs a very similar task.
<?php
namespace App\Console\Commands;
use Illuminate\Support\Str;
use Illuminate\Console\Command;
use Illuminate\Console\ConfirmableTrait;
class AssetVersioningCommand extends Command
{
use ConfirmableTrait;
/**
* The name and signature of the console command.
*
* @var string
*/
protected $signature = 'assets:version {--force : Force the operation to run when in production}';
/**
* The console command description.
*
* @var string
*/
protected $description = 'Generate an asset version identifier';
/**
* Execute the console command.
*
* @return int
*/
public function handle()
{
$key = $this->generateRandomKey();
if (! $this->setKeyInEnvironmentFile($key)) {
return;
}
$this->info('Asset version key set successfully.');
}
// ...
}
The generateRandomKey() method will be very simple:
protected function generateRandomKey(): string
{
return Str::random(16);
}
The setKeyInEnvironmentFile() method is borrowed directly from the key:generate command with some slight modification:
protected function setKeyInEnvironmentFile($key)
{
$currentKey = $this->laravel['config']['assets.version'];
if (strlen($currentKey) !== 0 && (!$this->confirmToProceed())) {
return false;
}
$this->writeNewEnvironmentFileWith($key);
return true;
}
The writeNewEnvironmentFileWith() method is also borrowed directly from the key:generate command:
protected function writeNewEnvironmentFileWith($key)
{
file_put_contents($this->laravel->environmentFilePath(), preg_replace(
$this->keyReplacementPattern(),
'ASSETS_VERSION=' . $key,
file_get_contents($this->laravel->environmentFilePath())
));
}
Finally, this is the keyReplacementPattern() method, again borrowed from the key:generate command:
protected function keyReplacementPattern()
{
$escaped = preg_quote('=' . $this->laravel['config']['assets.version'], '/');
return "/^ASSETS_VERSION{$escaped}/m";
}
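The net effect of writeNewEnvironmentFileWith() and keyReplacementPattern() is a targeted, line-anchored replacement in the .env file - roughly the same job as a sed substitution. As a rough shell analogue, with hypothetical key values:

```shell
# hypothetical .env contents with an existing version key
envfile=$(mktemp)
printf 'APP_ENV=local\nASSETS_VERSION=oldkey123\n' > "$envfile"

# replace the version line, anchored at line start like the preg_replace pattern
sed -i.bak 's/^ASSETS_VERSION=oldkey123/ASSETS_VERSION=newkey456/' "$envfile"

new_line=$(grep '^ASSETS_VERSION=' "$envfile")
echo "$new_line"   # ASSETS_VERSION=newkey456
```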
You can see the complete file here. With this command in place, we can generate a new asset version identifier upon each new deployment by adding a call to the assets:version command to our deployment script.
Now that we have the asset version ID available to us, we need to update our asset URLs to include the ID as a query parameter. We can do that with a custom blade directive. Add this to the boot() method of your AppServiceProvider:
Blade::directive('version', function($path) {
return "<?php echo config('assets.version') ? asset({$path}) . '?v=' . config('assets.version') : asset({$path}); ?>";
});
This blade directive accepts a partial path to an asset. It uses the asset() URL helper to generate a full URL to the asset, then appends our version ID to the URL as a query string. If no version is found the query parameter will be omitted.
We will now need to update our layout files to use this directive when loading assets:
<script src="@version('/js/app.js')"></script>
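With a hypothetical ASSETS_VERSION of abc123 and an app URL of https://myapp.com, that line would render roughly as:

```html
<script src="https://myapp.com/js/app.js?v=abc123"></script>
```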
We can now reference versioned assets anywhere we need to in our application, and we can ensure that browsers will always pull in the latest version of our assets when we deploy updates.
Let's take a look at an alternative asset pipeline configuration that has much less overhead and works particularly well in the TALL stack context.
TLDR? See the final package.json definition here.
esbuild is a javascript bundler written in Go. It has a very small footprint, and it compiles javascript very quickly. It can be installed with NPM:
npm install esbuild --save-dev
To compile javascript we pass it our main javascript file using flags to indicate the type of output we want to create. For example, to build our javascript for local development we might do something like this:
./node_modules/.bin/esbuild resources/js/app.js --outfile=public/js/app.js --target=es6 --bundle --sourcemap
A production build might look something like this:
./node_modules/.bin/esbuild resources/js/app.js --outfile=public/js/app.js --target=es6 --bundle --minify --define:process.env.NODE_ENV=\\\"production\\\"
The first argument is our javascript entry point; the file that serves as the orchestrator of our javascript bundle. In that file you will import any third party packages you are using, as well as any custom javascript you have written for your application. In most Laravel applications this is the resources/js/app.js
file, but it may be different in your project.
The --outfile argument tells esbuild where you want the compiled javascript to be written. In this case we are sending it to public/js/app.js. As a side note, I recommend adding automatically generated files like this to .gitignore so that they are not tracked in your repository. This will reduce commit noise over the long run, though it does mean that you will need to run the asset pipeline on each deployment - this is something you will likely be doing anyway.
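For instance, ignoring the compiled bundles and their source maps (these paths assume the output locations used in this post) could look like:

```
/public/js/app.js
/public/js/app.js.map
/public/css/app.css
/public/css/app.css.map
```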
The --target argument tells esbuild what format of javascript you want to compile to. This can be either a language version such as "es6" or "es2020", or a list of browser engines: "chrome", "firefox", "safari", etc. The default value is "esnext", which is very cutting edge. At the moment I am using the "es6" target for most of my projects, but that will likely change in the future.
The --bundle flag tells esbuild to bundle all of the javascript together, rather than linking out to separate files in the node_modules folder. I consider this to be optional for local development, but it is a requirement for production builds; without it your javascript will break in production, or behave unexpectedly at the very least.
The --sourcemap flag will trigger the creation of a source map file, which can be very helpful when doing local development. The --minify flag triggers automatic code minimization, as you can imagine.
The --define argument has many uses; here we are telling esbuild to operate with the NODE_ENV=production environment variable in place. Note the extra escape slashes - these are required to ensure that the value is parsed correctly by the binary.
PostCSS is "a tool for transforming CSS with javascript". It is incredibly versatile, and it is the engine that powers TailwindCSS. You will find that PostCSS is the best way to start working with next generation CSS code in your projects today. Most of the features you might find useful in SASS or LESS can be replicated with PostCSS fairly easily; it is possible that you could remove those tools entirely from your project and just use PostCSS instead.
We will use the PostCSS-CLI tool to run our css files through postcss and generate our compiled styles file. We can install both PostCSS and the CLI tool like so:
npm install postcss postcss-cli --save-dev
Before we can run the CLI build tool we have to define a postcss.config.js configuration file that will tell PostCSS how to operate.
module.exports = {
plugins: {
tailwindcss: {},
autoprefixer: {},
}
}
Here we are registering two plugins: Tailwind and Autoprefixer. These will need to be installed via NPM before you can use them in your asset pipeline. There are many plugins available in the PostCSS ecosystem; the possibilities are endless.
To create a css file for local development we can run this command:
./node_modules/.bin/postcss resources/css/app.css --output public/css/app.css --map --verbose
As with esbuild, the first argument we pass to the binary is the file we want to have processed. This is the entry point into our CSS. You could also point it to a directory if you want to combine multiple files together.
The --output argument names the destination for our compiled CSS. The --map flag tells PostCSS to generate a source map. The --verbose option tells the CLI to print contextual information to the terminal; without it the command output is very minimal.
To compress our styles in production we will use cssnano, which is a PostCSS plugin for minimizing CSS. We will only want to run minimization for production builds; to do that we will need to modify our postcss.config.js file to provide it with some context:
module.exports = (ctx) => ({
plugins: {
tailwindcss: {},
autoprefixer: {},
cssnano: ctx.env === 'production' ? {} : false,
},
})
This tells PostCSS to only use the cssnano plugin when the NODE_ENV environment variable is set to 'production'. To generate our production build we can modify our build command like so:
NODE_ENV=production ./node_modules/.bin/postcss resources/css/app.css --output public/css/app.css --verbose
The CLI tool has many more options that are worth investigating.
Our local development experience can improve quite a bit if we can have our asset pipeline monitor our resource files and re-compile them automatically when they are changed. Fortunately, both esbuild and the postcss-cli provide options for watching files and compiling them automatically for us.
For esbuild we can add a --watch
flag to monitor our javascript files:
./node_modules/.bin/esbuild resources/js/app.js --outfile=public/js/app.js --target=es6 --bundle --sourcemap --watch
The PostCSS-CLI tool has the same option:
./node_modules/.bin/postcss resources/css/app.css --output public/css/app.css --map --verbose --watch
If we define a separate NPM script for each of these commands we can then run them both at the same time:
npm run watch:js & npm run watch:css
With that in place we should have everything we need to work on our application locally and then deploy it to production.
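The ampersands matter here: a single `&` runs the first command in the background so both watchers run concurrently, while the double `&&` used in the build commands runs things sequentially and stops if the first command fails. A quick sketch of the difference:

```shell
# '&&' chains commands: the second runs only when the first succeeds
false && echo "this never prints"
chained=$(true && echo "chained ran")
echo "$chained"

# '&' sends the first command to the background; both run at once
sleep 0.2 &
bg_pid=$!
echo "prints immediately, without waiting for sleep"
wait "$bg_pid"
```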
Here is a package.json file that includes all of the dependencies we have discussed and the scripts we have defined:
{
"private": true,
"scripts": {
"build:css": "NODE_ENV=production postcss resources/css/app.css --output public/css/app.css --verbose",
"build:js": "esbuild resources/js/app.js --outfile=public/js/app.js --target=es6 --bundle --minify --define:process.env.NODE_ENV=\\\"production\\\"",
"build": "npm run build:js && npm run build:css",
"local:css": "postcss resources/css/app.css --output public/css/app.css --map --verbose",
"local:js": "esbuild resources/js/app.js --outfile=public/js/app.js --target=es6 --bundle --sourcemap",
"local": "npm run local:js && npm run local:css",
"watch:css": "postcss resources/css/app.css --output public/css/app.css --map --verbose --watch",
"watch:js": "esbuild resources/js/app.js --outfile=public/js/app.js --target=es6 --bundle --sourcemap --watch",
"watch": "npm run watch:js & npm run watch:css"
},
"dependencies": {
"alpinejs": "^2.8.1"
},
"devDependencies": {
"autoprefixer": "^10.2.5",
"esbuild": "^0.9.4",
"postcss": "^8.2.8",
"postcss-cli": "^8.3.1",
"tailwindcss": "^2.0.4"
}
}
Notes:
- You can add the compiled files to your .gitignore file and then rebuild them as part of your deployment process.
- In the build and local scripts above we separate our commands with two ampersands; this means that if the first command fails the second one will not run.
- In the watch script above we separate our commands with one ampersand; this means they will be run simultaneously.
- You will no longer need the mix() helper in your template files.

The final step in setting up our asset pipeline will be implementing asset versioning for browser cache busting. This, however, will be a topic for a separate post.
There are two primary options available to us: 1) We can configure our code editor to format code automatically, or 2) We can use a git "pre-commit" hook to run a formatting script for us automatically. In this post I will be discussing the latter.
Git hooks are configurable opportunities to run custom commands when performing common tasks with git. Within the .git folder in your project there is a folder called "hooks". This folder will store the bash scripts that we want git to trigger for us. If you take a look in that folder now you will see a handful of example files.
Give your script the name of a lifecycle hook and that script will be run when the hook is called. There are several lifecycle hooks available, each serving a slightly different purpose. The "pre-commit" hook is the best candidate for us; it is run just before a commit is recorded. We can use that hook to trigger our code formatter and the changes will be included in the commit record. Let's set that up now.
The first thing to think about is where you want to keep your script. You have the option of keeping it directly in the .git/hooks folder, but it won't be tracked as part of the rest of your repository. I prefer to keep the file somewhere in the regular part of the repo and then use a symbolic link to add it to the hooks folder. This way the script is under version control and new developers can use it themselves in the future if they so desire.
Create a file called pre-commit.sh and store it somewhere convenient. Then set up a symbolic link:
$ ln -s ../../pre-commit.sh .git/hooks/pre-commit
You will also need to make sure the file is executable:
$ chmod +x pre-commit.sh
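The wiring can be sanity-checked without touching a real project. Here we fake the repository layout in a temporary directory and invoke the hook through the link, just as git would before each commit:

```shell
dir=$(mktemp -d)
mkdir -p "$dir/.git/hooks"
cd "$dir"

# a stand-in hook script kept at the repository root
cat > pre-commit.sh <<'EOF'
#!/usr/bin/env bash
echo "hook ran"
EOF
chmod +x pre-commit.sh

# the symlink target is resolved relative to .git/hooks, hence the ../../
ln -s ../../pre-commit.sh .git/hooks/pre-commit

# invoke the hook through the link, as git would
hook_output=$(.git/hooks/pre-commit)
echo "$hook_output"   # hook ran
```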
Now we need to decide what we want this script to do.
For the sake of this demonstration let's assume you are working on a PHP project and you want to use the php-cs-fixer package to perform automatic formatting on the php files in your repository. A very minimal version of this script could look something like this:
#!/usr/bin/env bash
vendor/bin/php-cs-fixer fix
This will read your .php_cs configuration file and apply those rules to all of the php files in your codebase before every commit. This is essentially the same as calling the tool directly from the command line - the usage is exactly the same.
I am currently working on a project using a version of php that I don't have installed on my development machine. We can use docker to provide the same functionality without having to install multiple versions of PHP on the host machine. I have created a set of generic PHP docker images that is ideal for this purpose. Here is a version of that same formatting script that uses docker instead of the locally installed PHP instance:
#!/usr/bin/env bash
docker run --rm \
-u 1000 \
-v $(pwd):/var/www \
-w /var/www \
stagerightlabs/php-test-runner:7.4 /bin/bash -c "vendor/bin/php-cs-fixer fix --quiet"
Most of the usage here is specific to these particular docker images; your situation may be different. The -v $(pwd):/var/www argument mounts the current directory into the docker image at /var/www. -w /var/www sets the active working directory. We then call php-cs-fixer via the container, using the "quiet" flag to hide the output messages.
I find that this works very well; however the formatter is still looking at all the php files in the repo, which can be time consuming even with caching in place. This technique, from Sergey Protko, will limit the formatter to only the files that are going to be committed:
#!/usr/bin/env bash
CHANGED_FILES=$(git diff --cached --name-only --diff-filter=ACM -- '*.php')
if [ -n "$CHANGED_FILES" ]; then
vendor/bin/php-cs-fixer fix $CHANGED_FILES;
git add $CHANGED_FILES;
fi
With a script like this in place you will never have to worry about keeping up with formatting your codebase manually, or adding extra commits just for formatting changes. Take a look at the other git hooks that are available as well - they can be a very powerful tool and a great addition to your tool belt.
Let's take a look at hosting an Elixir application on a server provisioned by Forge.
TLDR: https://gist.github.com/rydurham/41904723ab07d8d60fa8295ee6f64822
Once Forge has created a new server for us, we will first need to install Elixir and Erlang. Connect to the server via SSH and then run the Elixir installation steps:
~$ wget https://packages.erlang-solutions.com/erlang-solutions_2.0_all.deb && sudo dpkg -i erlang-solutions_2.0_all.deb
~$ sudo apt update
~$ sudo apt install esl-erlang
~$ sudo apt install elixir
~$ rm erlang-solutions_2.0_all.deb
You could also set this task up as a recipe if you want to run it again on new servers down the road.
We can verify the installation by running:
~$ elixir -v
Erlang/OTP 23 [erts-11.1.4] [source] [64-bit] [smp:1:1] [ds:1:1:10] [async-threads:1] [hipe]
Elixir 1.11.2 (compiled with Erlang/OTP 23)
We can now create the site in our Forge management panel just like any other site. Go to the overview page for your new server and use the "Add site" tool to set up your new site. Select "Static HTML" as your project type, and the project root ("/") as your web directory. Once the site has been set up don't run a deployment just yet; we will first need to modify the nginx configuration and the deployment script to work with Elixir releases.
As noted in the Elixir documentation:
A release is a self-contained directory that consists of your application code, all of its dependencies, plus the whole Erlang Virtual Machine (VM) and runtime.
When we create a release it contains everything needed to run our application in a single directory with a corresponding binary. This binary can then be run as a daemon process listening for requests on a system port. The only catch is that you have to build the binary on the same type of system that you will be hosting it on. In our case, we will use a forge deployment script to build our release and start it as a daemon process every time we want to update the code.
Configuring an Elixir application for release is outside the scope of this post, but the Elixir documentation covers the topic very well. If you are working with Phoenix there is also excellent information about releases in that documentation as well.
The default nginx site configuration provided by Forge is very comprehensive; we only need to make a few small modifications to get things working the way we want. We will be using a reverse proxy to get nginx to forward web traffic to our application daemon.
First we will add the reverse proxy. Put this above the server block, near the top of the file:
upstream site {
    server localhost:4000;
}
"site" is a shorthand name for the service we are setting up. You should use something more specific to your application. Also, I am using port 4000 here, but you can configure your Elixir release to use any port you would like.
Now we can update the server block itself. As of this writing, there are two location blocks in the default nginx configuration. One is for handling the site root at / and the other is specifically for handling php files. Remove the second block (location ~ \.php$) completely; we won't be needing it.
Update the contents of the location / block like so:
location / {
proxy_http_version 1.1; # Required for phoenix channel websocket negotiation
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://site;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
Here we are configuring our proxy and then using the proxy_pass http://site line to forward requests to our upstream server on port 4000. Replace "site" with whatever name you used for your upstream service.
As of the time of this writing we need to enforce the use of HTTP/1.1 to allow websocket negotiations to work in the way that Elixir and Phoenix expect them to. Most of the proxy configuration listed here is used for that purpose.
With those changes in place you can save the script and restart nginx. Now, on to the deployment script.
A deployment script outlines the actions that Forge will perform whenever you request a site deployment. The default deployment script generated for new sites is geared towards PHP applications. Delete everything that is there and replace it with this:
cd /home/forge/www.example.com
# Fetch the latest version of the code
git pull origin main
# Ensure we have access to mix
if ! [ -x "$(command -v mix)" ]; then
echo 'Error: Elixir is not installed.' >&2
exit 1
fi
if ! [ -d /home/forge/site_logs ]; then
mkdir -p /home/forge/site_logs
fi
# Ensure we have access to hex and rebar
mix local.hex --force
mix local.rebar --force
# Install dependencies
mix deps.get --only prod
git checkout mix.lock
MIX_ENV=prod mix compile
# Compile assets
npm install --no-save --prefix ./apps/site_web/assets
npm run deploy --prefix ./apps/site_web/assets
MIX_ENV=prod mix phx.digest /home/forge/www.example.com/apps/site_web/priv/static
# Run the migrations
# MIX_ENV=prod mix ecto.migrate
# Generate the release
MIX_ENV=prod mix release production --overwrite
# Stop the existing process if it exists
_build/prod/rel/production/bin/production stop
# Start the release as a daemon process
RELEASE_TMP=/home/forge/site_logs _build/prod/rel/production/bin/production daemon
# Log the new OS PID
_build/prod/rel/production/bin/production pid
Let's break it down into chunks:
cd /home/forge/www.example.com
git pull origin main
First we move into the project's root directory and pull in the latest version of the code with git. Make sure you specify whichever repo branch you are using for deployments here. I am using "main".
if ! [ -x "$(command -v mix)" ]; then
echo 'Error: Elixir is not installed.' >&2
exit 1
fi
We won't be able to get very far if we can't use the mix tool provided by Elixir. Here we are checking to make sure it is available. If it isn't, then it is likely that something went wrong when we installed Elixir on the server.
if ! [ -d /home/forge/site_logs ]; then
mkdir -p /home/forge/site_logs
fi
We are going to use a separate directory for storing our application logs. Here we are making sure that the directory exists. If it doesn't we will create it.
mix local.hex --force
mix local.rebar --force
We will need both hex and rebar to manage our Elixir dependencies and build our release. Here we are making sure they are available to us.
mix deps.get --only prod
git checkout mix.lock
MIX_ENV=prod mix compile
Here we are fetching our production dependencies and compiling them for use in our production environment. The mix.lock file might drift a bit after fetching the dependencies; I am reverting the file to prevent any changed files from stopping a git pull in the future.
npm install --no-save --prefix ./apps/site_web/assets
npm run deploy --prefix ./apps/site_web/assets
MIX_ENV=prod mix phx.digest /home/forge/www.example.com/apps/site_web/priv/static
Now we are compiling our front end assets. You will need to update the prefix value to point to your own asset directory. The phx.digest command prepares our static assets for use with our release binary.
# MIX_ENV=prod mix ecto.migrate
If you want to automatically run new database migrations you can uncomment this line. I tend to prefer to run migrations manually, but the choice is yours.
MIX_ENV=prod mix release production --overwrite
This is where we build our new release binary. The --overwrite flag tells Elixir to replace the currently tagged release rather than creating a new one with a new version number. You may decide that you want to keep your old releases and tag a new build version for each release; this will require that you update your release configuration in the application for each deployment.
_build/prod/rel/production/bin/production stop
If we have an existing release running this command will stop it. If there is no existing release this command will error, but that doesn't matter for our purposes.
RELEASE_TMP=/home/forge/site_logs _build/prod/rel/production/bin/production daemon
Here we start up a new daemon process with the newly built binary. Note that we are setting an environment variable that tells the binary where to put its log files - this is the same folder path we created earlier.
The exact path to the binary will depend on your release configuration.
_build/prod/rel/production/bin/production pid
I like to include the newly started process ID in the deployment log; this is optional.
With this deployment script in place you can now use Forge to automatically build and deploy Elixir application releases on-demand. How neat is that? Forge is a remarkable tool.
There are a couple things to keep in mind:
Let's start by displaying the coordinates of the camera position within the scene. Create a new ControlPanel.vue component in your src/components/ directory.
We will use a Vuex getter to retrieve the camera position and display it on the control panel:
getters: {
CAMERA_POSITION: state => {
return state.camera ? state.camera.position : null;
}
},
Let's import this getter into our Control Panel component:
import { mapGetters, mapMutations } from "vuex";
export default {
data () {
return {
axisLinesVisible: true,
pyramidsVisible: true
};
},
computed: {
...mapGetters(["CAMERA_POSITION"])
},
// ...
}
You can see that we have also added two boolean flags to the component's data object. Later on we will use these to toggle the visibility of the pyramids and axis lines in our scene.
By mapping our vuex CAMERA_POSITION getter into the Control Panel's computed properties, we can display those coordinates and they will update in real time:
<div
v-if="CAMERA_POSITION"
class="border-b border-grey-darkest mb-2 pb-2"
>
<p class="mb-1 text-grey-light font-bold">
Camera Position
</p>
<p class="flex justify-between w-full mb-2 text-grey-light">
X:<span class="text-white">{{ CAMERA_POSITION.x }}</span>
</p>
<p class="flex justify-between w-full mb-2 text-grey-light">
Y:<span class="text-white">{{ CAMERA_POSITION.y }}</span>
</p>
<p class="flex justify-between w-full mb-2 text-grey-light">
Z:<span class="text-white">{{ CAMERA_POSITION.z }}</span>
</p>
<!-- more... -->
</div>
(Check out the project repo to see how this component has been styled with Tailwind utility classes.)
I have found that the trackball control implementation in Three.js can be counter-intuitive at times. It is very easy for the user to end up somewhere they did not intend to go. Let's add a button to our control panel that will reset the camera position to origin. We will do that with (you guessed it) a Vuex mutation:
mutations: {
// ...
SET_CAMERA_POSITION(state, { x, y, z }) {
if (state.camera) {
state.camera.position.set(x, y, z);
}
},
RESET_CAMERA_ROTATION(state) {
if (state.camera) {
state.camera.rotation.set(0, 0, 0);
state.camera.quaternion.set(0, 0, 0, 1);
state.camera.up.set(0, 1, 0);
state.controls.target.set(0, 0, 0);
}
},
// ...
}
Note that resetting the camera requires more than just changing its position. We also have to account for the rotation of the camera around three additional axes. (Check out the Three.js documentation for more details.)
Now that the mutation is in place, let's add it to our Control Panel:
<p class="flex items-center">
<button
class="bg-grey-light cursor-pointer shadow p-2 mx-auto"
@click="resetCameraPosition"
>
Reset Camera
</button>
</p>
methods: {
...mapMutations([
"SET_CAMERA_POSITION",
"RESET_CAMERA_ROTATION",
]),
resetCameraPosition() {
this.SET_CAMERA_POSITION({ x: 0, y: 0, z: 500 });
this.RESET_CAMERA_ROTATION();
},
// ...
}
Excellent! Everything is turning out very well so far. To wrap things up we will allow the user to manipulate what they see by selectively hiding the content of the scene. We will have two toggles in the control panel: one to control the pyramids and one to control the axis lines. Each will get its own vuex mutation.
First the axis lines:
mutations: {
// ..
HIDE_AXIS_LINES(state) {
state.scene.remove(...state.axisLines);
state.renderer.render(state.scene, state.camera);
},
SHOW_AXIS_LINES(state) {
state.scene.add(...state.axisLines);
state.renderer.render(state.scene, state.camera);
},
// ..
}
Next, the pyramids:
mutations: {
// ...
HIDE_PYRAMIDS(state) {
state.scene.remove(...state.pyramids);
state.renderer.render(state.scene, state.camera);
},
SHOW_PYRAMIDS(state) {
state.scene.add(...state.pyramids);
state.renderer.render(state.scene, state.camera);
}
}
It is important to note here that we are only able to do this because we are keeping track of the scenery in our application state, separate from the camera and the rendered scene. If we had generated the scenery and used it to render the scene without saving it anywhere this would not be possible.
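Stripped of the Three.js specifics, the pattern at work here can be sketched in a few lines of plain JavaScript. The scene below is a stand-in object used purely for illustration, not the Three.js API, but the idea is identical: because we keep our own array of references to the scenery, we can later remove exactly those objects again.

```javascript
// Stand-in "scene" so the sketch runs without Three.js.
const scene = {
  children: new Set(),
  add(...objects) { objects.forEach(o => this.children.add(o)); },
  remove(...objects) { objects.forEach(o => this.children.delete(o)); }
};

// Build the scenery, saving a reference to each object as we go.
const pyramids = [];
for (let i = 0; i < 3; i++) {
  pyramids.push({ id: i });
}
scene.add(...pyramids);

// Because the references were saved, hiding the scenery later is trivial;
// without the pyramids array there would be no way to know what to remove.
scene.remove(...pyramids);
```

This is exactly why the mutations can call state.scene.remove(...state.pyramids) at any time: the state object never loses track of what was added.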
We can now import these methods into our Control Panel:
<p class="flex items-center justify-between mb-1">
Pyramids
<input
type="checkbox"
name="pyramids"
id="pyramids"
v-model="pyramidsVisible"
@click="togglePyramids"
/>
</p>
<p class="flex items-center justify-between">
Axis Lines
<input
type="checkbox"
name="axis-lines"
id="axis-lines"
v-model="axisLinesVisible"
@click="toggleAxisLines"
/>
</p>
methods: {
...mapMutations([
"SET_CAMERA_POSITION",
"RESET_CAMERA_ROTATION",
"HIDE_AXIS_LINES",
"SHOW_AXIS_LINES",
"HIDE_PYRAMIDS",
"SHOW_PYRAMIDS"
]),
resetCameraPosition() {
this.SET_CAMERA_POSITION({ x: 0, y: 0, z: 500 });
this.RESET_CAMERA_ROTATION();
},
toggleAxisLines() {
if (this.axisLinesVisible) {
this.HIDE_AXIS_LINES();
this.axisLinesVisible = false;
} else {
this.SHOW_AXIS_LINES();
this.axisLinesVisible = true;
}
},
togglePyramids() {
if (this.pyramidsVisible) {
this.HIDE_PYRAMIDS();
this.pyramidsVisible = false;
} else {
this.SHOW_PYRAMIDS();
this.pyramidsVisible = true;
}
}
}
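The two toggle methods share an identical shape, which can be distilled into a small framework-free sketch. The names here are illustrative, not part of the component above:

```javascript
// A boolean flag decides which of a show/hide pair to invoke,
// then the flag is flipped, mirroring toggleAxisLines/togglePyramids.
function makeToggle(state, key, show, hide) {
  return () => {
    if (state[key]) {
      hide();
      state[key] = false;
    } else {
      show();
      state[key] = true;
    }
  };
}

// Example usage: record which handler fires on each click.
const state = { pyramidsVisible: true };
const calls = [];
const togglePyramids = makeToggle(
  state,
  "pyramidsVisible",
  () => calls.push("show"),
  () => calls.push("hide")
);

togglePyramids(); // visible -> hidden
togglePyramids(); // hidden -> visible
```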
You have now successfully used Vue, Vuex and Three.js to create and manage a three-dimensional scene in your browser. You can see this code in action here. This demonstration is somewhat simplistic, but the ideas here should provide a solid foundation for building a more realistic application.
The completed code for this project is available here. You can also check out a demo. Our goal will be to convert this Three.js Example into a Vue application, and perhaps add a bit more functionality while we are at it.
To start, spin up a new vue application with the Vue CLI tool. (If you haven't done that before, check out this great tutorial.) For the sake of this tutorial you can choose any of the settings that you are comfortable with; just make sure you choose to have Vuex added to the project.
Next, we will add Three.js to the project. We could install it directly via NPM; however, it does not yet play nicely with ES6 imports. To get around this, we can use the three-full
mirror, which delivers Three.js in a format that is much easier to use with ES6 style imports. Let's install that now:
$ npm install three-full
We can now get our development server up and running by using:
$ npm run serve
The first thing we will do is remove the HelloWorld.vue
file that came with your new application and we will also remove all references to that file from App.vue
. We can also remove the default styles from the bottom of the App.vue
file and replace them with this:
<style>
html,
body {
width: 100%;
height: 100%;
overflow: hidden;
}
body {
margin: 0px;
}
#app {
height: 100%;
}
</style>
Here we are removing the default padding from the html and body tags and making sure that our #app div will fill the entire visible browser window.
We will use a ViewPort
component to manage the canvas element generated by Three.js; let's create that now. Add a new file called ViewPort.vue
in your /src/components
directory. We can now import that into the main App.vue
component. Update the script
section in that file to look like this:
import ViewPort from "@/components/ViewPort.vue";
export default {
components: {
viewport: ViewPort,
}
};
And we can now use that component in our App.vue template:
<template>
<div id="app">
<viewport/>
</div>
</template>
Let's now turn our attention to getting Three.js up and running. Three.js renders three dimensional scenes on canvas elements; before you can render a scene you need to create Camera, Control and Scene objects, which you will then give to a WebGLRenderer to have the canvas element generated for you. Three.js provides tools for creating all of these objects and populating the scene with three dimensional objects.
We are going to let Vuex manage the various components of our Three.js scene; this will make it much easier for us to modify the scene contents once they have been created, as we will explore in part two. Open your store.js
file, and update the top of the file to look like this:
import Vue from "vue";
import Vuex from "vuex";
import {
Scene,
TrackballControls,
PerspectiveCamera,
WebGLRenderer,
Color,
FogExp2,
CylinderBufferGeometry,
MeshPhongMaterial,
Mesh,
DirectionalLight,
AmbientLight,
LineBasicMaterial,
Geometry,
Vector3,
Line
} from "three-full";
Vue.use(Vuex);
Here we are importing Vue, Vuex and a litany of Three.js objects that we will use to create our scene. We are going to use vuex mutations to manage the creation of the scenes, and a vuex action to trigger those mutations. The general rule of thumb is that actions are used to handle asynchronous updates, and mutations must always be used for synchronous updates. In a normal Vuex workflow, you might have an action that makes an HTTP request, which would then hand off the response data to mutations to store that data in the vuex state, the application's single source of truth.
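To make that division of labor concrete, here is a minimal, framework-free sketch of the pattern. This is a toy store object, not the real Vuex API: the action performs the asynchronous work and then commits the result, while the mutation is the only code that writes to state.

```javascript
// Toy store illustrating the action/mutation split.
const store = {
  state: { items: [] },
  mutations: {
    // Mutations are synchronous: they simply write to state.
    SET_ITEMS(state, items) {
      state.items = items;
    }
  },
  actions: {
    // Actions may be asynchronous; they commit mutations when done.
    FETCH_ITEMS({ commit }) {
      return Promise.resolve(["a", "b"]) // stand-in for an http request
        .then(items => commit("SET_ITEMS", items));
    }
  },
  commit(type, payload) {
    this.mutations[type](this.state, payload);
  },
  dispatch(type) {
    return this.actions[type]({ commit: this.commit.bind(this) });
  }
};
```

Calling store.dispatch("FETCH_ITEMS") returns a promise that resolves once the mutation has stored the fetched items.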
Let's start by adding some new items to our state object:
state: {
width: 0,
height: 0,
camera: null,
controls: null,
scene: null,
renderer: null,
axisLines: [],
pyramids: []
},
We will use width
and height
to keep track of the canvas size. camera
, controls
, scene
and renderer
will be use to store the tools generated by Three.js. axisLines
and pyramids
will be used to keep track of the visual elements used in our scene; the scenery if you will.
First up, let's create a mutation that sets the height and width of the canvas. By convention, vuex mutation and action names are written in upper case:
mutations: {
SET_VIEWPORT_SIZE(state, { width, height }) {
state.width = width;
state.height = height;
},
}
Here we are receiving a width and a height and updating the state accordingly. Next, let's create our renderer:
mutations: {
// ...
INITIALIZE_RENDERER(state, el) {
state.renderer = new WebGLRenderer({ antialias: true });
state.renderer.setPixelRatio(window.devicePixelRatio);
state.renderer.setSize(state.width, state.height);
el.appendChild(state.renderer.domElement);
},
}
Here we are instantiating a new WebGLRenderer (provided by Three.js) and setting the width and height of the scene that we want to create. Note that this function receives a reference to a dom element (often referred to as el). The WebGLRenderer will create a canvas element for us, but it won't be visible unless we actually add it to the dom tree. el.appendChild adds the canvas element as a child node of the el dom element.
Now let's create our camera:
mutations: {
// ...
INITIALIZE_CAMERA(state) {
state.camera = new PerspectiveCamera(
// 1. Field of View (degrees)
60,
// 2. Aspect ratio
state.width / state.height,
// 3. Near clipping plane
1,
// 4. Far clipping plane
1000
);
state.camera.position.z = 500;
},
}
Here we are creating a new PerspectiveCamera
object with four parameters: the field of view of the camera's "lens" in degrees, the aspect ratio of the camera's output, and the near and far "clipping plane" boundaries. Anything closer than 1 unit or further than 1000 units away from the camera will not be visible. Finally, we set the starting position of the camera at 500 units away from origin on the z-axis.
Now let's create our controls:
mutations: {
// ...
INITIALIZE_CONTROLS(state) {
state.controls = new TrackballControls(
state.camera,
state.renderer.domElement
);
state.controls.rotateSpeed = 1.0;
state.controls.zoomSpeed = 1.2;
state.controls.panSpeed = 0.8;
state.controls.noZoom = false;
state.controls.noPan = false;
state.controls.staticMoving = true;
state.controls.dynamicDampingFactor = 0.3;
},
}
Here we instantiate a new TrackballControls
object and set up a default configuration for it. The exact nature of this configuration is a bit beyond the scope of this tutorial, but you should feel free to play around with these values and see what happens.
The most important thing to note here is that we are passing in our canvas element as the second argument to the TrackballControls constructor. This will limit the controls to listen only for input events that occur on that dom element. If you don't provide this, it will default to listening to all input events on the entire document which will effectively steal focus away from any other content on the page and translate all input into camera movements in the rendered scene. By limiting this to just the canvas element we will still be able to interact with other content on the page normally.
Next up, the scene content itself. This one is a doozy:
mutations: {
// ...
INITIALIZE_SCENE(state) {
state.scene = new Scene();
state.scene.background = new Color(0xcccccc);
state.scene.fog = new FogExp2(0xcccccc, 0.002);
var geometry = new CylinderBufferGeometry(0, 10, 30, 4, 1);
var material = new MeshPhongMaterial({
color: 0xffffff,
flatShading: true
});
for (var i = 0; i < 500; i++) {
var mesh = new Mesh(geometry, material);
mesh.position.x = (Math.random() - 0.5) * 1000;
mesh.position.y = (Math.random() - 0.5) * 1000;
mesh.position.z = (Math.random() - 0.5) * 1000;
mesh.updateMatrix();
mesh.matrixAutoUpdate = false;
state.pyramids.push(mesh);
}
state.scene.add(...state.pyramids);
// lights
var lightA = new DirectionalLight(0xffffff);
lightA.position.set(1, 1, 1);
state.scene.add(lightA);
var lightB = new DirectionalLight(0x002288);
lightB.position.set(-1, -1, -1);
state.scene.add(lightB);
var lightC = new AmbientLight(0x222222);
state.scene.add(lightC);
// Axis Line 1
var materialB = new LineBasicMaterial({ color: 0x0000ff });
var geometryB = new Geometry();
geometryB.vertices.push(new Vector3(0, 0, 0));
geometryB.vertices.push(new Vector3(0, 1000, 0));
var lineA = new Line(geometryB, materialB);
state.axisLines.push(lineA);
// Axis Line 2
var materialC = new LineBasicMaterial({ color: 0x00ff00 });
var geometryC = new Geometry();
geometryC.vertices.push(new Vector3(0, 0, 0));
geometryC.vertices.push(new Vector3(1000, 0, 0));
var lineB = new Line(geometryC, materialC);
state.axisLines.push(lineB);
// Axis 3
var materialD = new LineBasicMaterial({ color: 0xff0000 });
var geometryD = new Geometry();
geometryD.vertices.push(new Vector3(0, 0, 0));
geometryD.vertices.push(new Vector3(0, 0, 1000));
var lineC = new Line(geometryD, materialD);
state.axisLines.push(lineC);
state.scene.add(...state.axisLines);
},
}
Most of this comes directly from the Three.js example that we are emulating, but there is one important difference to note: we push each pyramid mesh and axis line into arrays on our state object (state.pyramids and state.axisLines) before adding them to the scene, which lets us reference this scenery later when we want to modify it.
Next we will create a mutation that will handle resizing the canvas element for us:
mutations: {
// ...
RESIZE(state, { width, height }) {
state.width = width;
state.height = height;
state.camera.aspect = width / height;
state.camera.updateProjectionMatrix();
state.renderer.setSize(width, height);
state.controls.handleResize();
state.renderer.render(state.scene, state.camera);
},
}
This should be pretty straightforward. When we want to resize the canvas we call this mutation and provide it with our new width and height. It then updates the camera and renderer accordingly and re-renders the scene using the new dimensions.
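The essential arithmetic of that mutation is easy to check in isolation. This sketch uses a plain-object stand-in for the camera rather than the Three.js class, to show why the aspect ratio must be recomputed from the new dimensions:

```javascript
// Stand-in camera; in Three.js you would also call
// camera.updateProjectionMatrix() after changing the aspect ratio.
function resize(state, { width, height }) {
  state.width = width;
  state.height = height;
  state.camera.aspect = width / height;
}

const state = { width: 800, height: 600, camera: { aspect: 800 / 600 } };
resize(state, { width: 1920, height: 1080 });
// The camera now renders at a 16:9 aspect ratio instead of 4:3,
// so the scene will not look stretched in the resized canvas.
```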
Finally, we will now set up two vuex actions that will orchestrate these various mutations for us. First we will use an action to initialize our scene on page load:
actions: {
INIT({ state, commit }, { width, height, el }) {
return new Promise(resolve => {
commit("SET_VIEWPORT_SIZE", { width, height });
commit("INITIALIZE_RENDERER", el);
commit("INITIALIZE_CAMERA");
commit("INITIALIZE_CONTROLS");
commit("INITIALIZE_SCENE");
// Initial scene rendering
state.renderer.render(state.scene, state.camera);
// Add an event listener that will re-render
// the scene when the controls are changed
state.controls.addEventListener("change", () => {
state.renderer.render(state.scene, state.camera);
});
resolve();
});
},
}
The init function returns a promise that resolves once all of our various Three.js components have been created and registered. It also takes care of the initial scene rendering and sets an event listener that will re-render the scene if the controls receive an input trigger.
Finally, we will use an action to set up our animation loop. This is a recursive function that re-renders the scene (if needed) during every tick of the event loop. We could use setTimeout
here, but requestAnimationFrame
does the same thing except that it will pause the animation loop if the browser loses focus. See more about animation frames here.
actions: {
// ..
ANIMATE({ state, dispatch }) {
window.requestAnimationFrame(() => {
dispatch("ANIMATE");
state.controls.update();
});
}
}
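Stripped to its essentials, the recursion looks like this. The scheduler is passed in as a parameter so that this sketch can run synchronously; in the browser you would pass window.requestAnimationFrame instead. The names here are illustrative:

```javascript
// Each tick does its per-frame work, then schedules the next tick,
// mirroring the recursive dispatch in the ANIMATE action.
let frames = 0;

function animate(scheduleNext) {
  frames += 1; // stand-in for controls.update() / re-rendering
  if (frames < 3) {
    scheduleNext(() => animate(scheduleNext)); // recurse on the next tick
  }
}

// Drive three "frames" synchronously for demonstration.
animate(fn => fn());
```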
That should cover everything we need to get our 3D scene up and running. Now let's put it all together in our ViewPort
component. The template and the styling of this component are very straightforward:
<template>
<div class="viewport"/>
</template>
We create a single dom node (the root node for this component) which will become a wrapper around the canvas element generated by our WebGLRenderer.
<style>
.viewport {
height: 100%;
width: 100%;
}
</style>
Here we are setting the height and width of the component to be 100% of its parent node, which in this case happens to be the main #app div (the root node of the App.vue
component.) This will ensure that our canvas element uses the entire browser screen.
The real fun is in the script
section of the component:
import { mapMutations, mapActions } from "vuex";
export default {
data () {
return {
height: 0
};
},
methods: {
...mapMutations(["RESIZE"]),
...mapActions(["INIT", "ANIMATE"])
},
mounted () {
this.INIT({
width: this.$el.offsetWidth,
height: this.$el.offsetHeight,
el: this.$el
}).then(() => {
this.ANIMATE();
window.addEventListener("resize", () => {
this.RESIZE({
width: this.$el.offsetWidth,
height: this.$el.offsetHeight
});
});
});
}
};
To start, we import some helper methods from Vuex which will allow us to reference our vuex actions and mutations directly from this component. Next, when the component is mounted (on page load) we will trigger our INIT function, creating our three dimensional scene. When it is ready we will trigger our animation loop and set an event listener that triggers our RESIZE function whenever the browser window is resized.
That should be everything we need! Take a look at the URL being used by your dev server (usually http://localhost:8080/
) to see your beautifully rendered scene.
The next installment of this tutorial will investigate creating a control panel that will allow users to manually manipulate the rendered scene.
After some trial and error, I have come up with a working .travis.yml
configuration.
language: php
php:
- 7.1
- 7.2
before_script:
- travis_retry composer self-update
- travis_retry composer install --prefer-source --no-interaction --no-suggest
- cp .env.travis .env
- php artisan key:generate
- php artisan migrate
- php artisan passport:keys
script:
- vendor/bin/phpunit
sudo: false
The first two sections are self-explanatory. We are going to be testing a PHP project, and we want Travis to test our code against PHP 7.1 and 7.2.
The before_script section is where we outline how we want the testing environment to be prepared before the tests are run. Let's break this down line by line:
- travis_retry composer self-update: updates Composer itself to the latest release. (The travis_retry wrapper re-runs a command that fails due to a transient error.)
- travis_retry composer install --prefer-source --no-interaction --no-suggest: installs our dependencies as specified in the composer.lock file that lives in our project repo.
- cp .env.travis .env: copies in a dedicated .env file to ensure that the application being tested is configured correctly for this testing environment. See below for more details.
- php artisan key:generate: generates the application encryption key.
- php artisan migrate: runs our database migrations. This project keeps the Passport migrations in the database/migrations folder, rather than registering them separately. Regardless, these migrations include the tables that Passport uses to store its client data. It is also important to note that the project has been configured to use an in-memory sqlite database, which is why we don't have to create an empty database during these provisioning steps.
- php artisan passport:keys: generates the encryption keys that Passport needs, which are written to the storage/ directory.
The script section is where we tell Travis how to run our tests. Note that I am specifying that I want Travis to use the version of PHPUnit that was installed via Composer, rather than the global version that it makes available to us in the container. This way I can always know for sure exactly which version of PHPUnit I am running, and we can run a newer version if we want to.
The last section, sudo: false, is how we tell Travis to run our tests in a containerized environment. This often means that our tests will run faster (though not always). The downside is that we don't have sudo available to us when provisioning. If we were to need that option we would have to use the slower, non-container environment that Travis offers.
This is the special .env.travis
file I used to specify application specific testing configuration:
APP_NAME=ExampleLaravelApplication
APP_ENV=local
APP_KEY=
APP_DEBUG=true
APP_LOG_LEVEL=debug
APP_URL=http://localhost
DB_CONNECTION=sqlite
DB_DATABASE=:memory:
DB_USERNAME=homestead
DB_PASSWORD=secret
BROADCAST_DRIVER=log
CACHE_DRIVER=file
SESSION_DRIVER=file
SESSION_LIFETIME=120
QUEUE_DRIVER=sync
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
MAIL_DRIVER=log
MAIL_HOST=smtp.mailtrap.io
MAIL_PORT=2525
MAIL_USERNAME=null
MAIL_PASSWORD=null
MAIL_ENCRYPTION=null
The most important option here is the DB_DATABASE=:memory: option. This tells the application that we want to use an in-memory sqlite database. Even though the APP_ENV is "local" here, it will be overridden by the ENV values specified in the phpunit.xml file, which is where we set the application environment to "testing" for the lifetime of the test run.
We also set some important environment variables in the phpunit.xml
file:
<?xml version="1.0" encoding="UTF-8"?>
<phpunit backupGlobals="false"
backupStaticAttributes="false"
bootstrap="vendor/autoload.php"
colors="true"
convertErrorsToExceptions="true"
convertNoticesToExceptions="true"
convertWarningsToExceptions="true"
processIsolation="false"
stopOnFailure="false">
<testsuites>
<testsuite name="Feature">
<directory suffix="Test.php">./tests/Feature</directory>
</testsuite>
<testsuite name="Unit">
<directory suffix="Test.php">./tests/Unit</directory>
</testsuite>
</testsuites>
<filter>
<whitelist processUncoveredFilesFromWhitelist="true">
<directory suffix=".php">./app</directory>
</whitelist>
</filter>
<php>
<env name="APP_ENV" value="testing"/>
<env name="CACHE_DRIVER" value="array"/>
<env name="SESSION_DRIVER" value="array"/>
<env name="QUEUE_DRIVER" value="sync"/>
<env name="DB_CONNECTION" value="testing" />
<env name="PASSPORT_CLIENT_SECRET" value="Dtg9myGAIsTUbTck2kxrxeZ5TDnE1qbVvTeYIuPN" />
</php>
</phpunit>
There are two important things to note about this file:
- We set the APP_ENV environment variable to "testing". This puts our Laravel application into testing mode, which disables certain features like CSRF checks, which can be annoying to deal with when testing.
- We set a PASSPORT_CLIENT_SECRET value. This value matches the oauth secret used in our test fixtures when we create simulated Password Clients for our authentication tests. The application I am testing uses an environment variable to store the ID of the password grant client we use to decode Json Web Tokens; we want to simulate that in our testing environment. You may not need to do this if you are using Passport differently.
That's it! Once you understand how the pieces fit together it is not as complex as it first seems.
Just to mix things up, I thought I would also try taking Tailwind CSS for a spin as well. I may have to re-work quite a bit of the design template to integrate it with the Tailwind framework, but if it works it will be worth the effort.
Here are the steps I took to get my dev setup up and running:
Start by installing the Vue CLI tool and then using it to generate a new site:
$ vue init webpack homepage
I used all the default answers to the prompts, and I had Vue CLI run npm install
for me. Next, after moving my terminal session into the new project directory, I installed Sass and Tailwind CSS:
$ npm install node-sass sass-loader tailwindcss --save-dev
and then I initialized my new tailwind.js
configuration file:
$ ./node_modules/.bin/tailwind init tailwind.js
We also need to update the .postcssrc.js
file to have PostCSS trigger the Tailwind compilation step:
module.exports = {
plugins: [require("tailwindcss")("tailwind.js"), require("autoprefixer")()]
};
Now that everything has been installed, we just need to get it all working together. Start by creating an app.scss
file in a new /src/assets/scss/
directory, and then pasting in the tailwind base directives. In the example below I have removed the comments to save space.
@tailwind preflight;
/* Custom Sass goes here... */
@tailwind utilities;
Next, locate the "style" section in the App.vue
file and replace it with this:
<style lang="sass">
@import "./assets/scss/app.scss"
</style>
Note that when using Sass with the Vue style loader (which allows for using Sass in Vue component files) you have to be specific here about which flavor of Sass you are writing. If you specify lang="sass"
the loader will assume you are using the "indentation" style of Sass, whereas lang="scss"
will indicate the use of the bracketed SCSS style.
At this point, you should be able to run npm run dev
to compile the assets and launch a dev server.
To verify that everything is working, try updating your app.scss file to look like this:
@tailwind preflight;
#app {
@apply .text-center .text-grey-darker .mt-8;
font-family: 'Avenir', Helvetica, Arial, sans-serif;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
}
@tailwind utilities;
If everything is working correctly, wepback will automatically recompile that file and refresh the dev server content, and you should see that we are now successfully using Tailwind CSS directives to manipulate the content on the home page.
As you may know, the form data validation provided by Laravel is very powerful and robust. However, it occurred to me the other day that there is one area that still feels a bit messy and overly complicated. When displaying error messages to users, some developers prefer to show all of the error messages at the top of the form - this is very easy to do and Laravel makes it a snap to implement. However, other developers prefer to show the errors within the body of the form, so that each message appears next to its corresponding input. To do that, you have to set up your form inputs to look something like this:
<div class="form-group">
<label for="input-title">Title</label>
<input type="text" name="title" class="form-control {{ $errors->has('title') ? 'is-invalid' : '' }}" id="input-title">
{!! $errors->has('title') ? $errors->first('title', '<div class="invalid-feedback">:message</div>') : '' !!}
</div>
In this example, which makes use of Bootstrap 4, we first check the $errors
Message Bag to see if this input has any messages available. If so, we add the is-invalid
class to the input. After that we check the errors again for this input's first message and display it. By default, the messages are delivered back as plain text, but by passing in a format string we get the message back wrapped in the html we specify. This is very handy when working within a CSS framework that has specific requirements about how validation messages should be displayed.
This feels messy to me. There is a lot going on there, and having to specify the message format repeatedly for each input on the form seems needlessly redundant. How can we simplify this? How about a custom blade directive?
Add this to your AppServiceProvider
(or any place that is loaded before views are rendered):
Blade::directive('error', function($key) {
$key = str_replace(['\'', '"'], '', $key);
$errors = session()->get('errors') ?: new \Illuminate\Support\ViewErrorBag;
if ($message = $errors->first($key)) {
return "<?php echo '<div class=\"invalid-feedback\">{$message}</div>'; ?>";
}
});
This allows us to simplify our input like so:
<div class="form-group">
<label for="input-title">Title</label>
<input type="text" name="title" class="form-control {{ $errors->has('title') ? 'is-invalid' : '' }}" id="input-title" >
@error('title')
</div>
We have wrapped up our error message display logic in a single reusable bundle. When we receive the key from the blade compiler it is delivered as a raw string that includes the single quotes, like so: 'title'
. The first step is stripping out the quotation marks. After that we resolve the error message bag out of the session. If there is no message bag available we new one up instead. Finally, we return the content of the validation message formatted, in this case, for Bootstrap 4.
This is all well and good, but I think we can take it one step further. Most applications have a helpers file for collecting utility functions. (I am a big fan of this technique.) If you have one available, toss this in there:
if (! function_exists('hasError')) {
/**
* Check for the existence of an error message and return a class name
*
* @param string $key
* @return string
*/
function hasError($key)
{
$errors = session()->get('errors') ?: new \Illuminate\Support\ViewErrorBag;
return $errors->has($key) ? 'is-invalid' : '';
}
}
It would be nice to set this up as a blade directive as well but because we want to keep it on the same line as the input tag itself, a blade directive is not an option; a helper function makes more sense. I am not quite sure that hasError
is the right name for this - if you have any inspired ideas let me know in the comments. Regardless, with that method in place, we can now update our input like so:
<div class="form-group">
<label for="input-title">Title</label>
<input type="text" name="title" class="form-control {{ hasError('title') }}" id="input-title" >
@error('title')
</div>
This feels much cleaner to me.
It is important to note that by using this blade directive we are not actually changing how messages are displayed - the methodology remains the same. All we are doing is isolating some of the clutter in the template file and hopefully making it easier to read and understand by future developers.
While we are at it, there is one more custom directive you might find useful:
Blade::directive('errors', function() {
$errors = session()->get('errors') ?: new \Illuminate\Support\ViewErrorBag;
return "<?php echo '<pre>" . print_r($errors->getMessages(), true) . "</pre>'; ?>";
});
This is a utility method for quickly echoing all of the currently available error messages to the screen, when used like this:
@errors
If you ever find yourself with a form that is not passing validation but no error messages are being displayed, this tool can tell you if there are any messages not being displayed.
You might think that these ideas would be good candidates for inclusion in the Laravel Framework directly, but I disagree. If these were provided by the framework they would be much harder to customize between applications, requiring more layers of code and cognitive overhead to keep working properly. It is much simpler to just toss them in to an application when needed and customize them directly. No muss no fuss. This is exactly why Taylor set up the ability to create custom Blade directives in the first place.
If you run into any trouble adding this to your project, remember that the docs recommend flushing your view cache any time you edit your blade directives, via php artisan view:clear
.
Every Laravel 5.* application has an App\Exceptions\Handler
class that allows us to customize how exceptions are reported and how they are displayed to the user (More info here.) This class extends a framework class called Illuminate\Foundation\Exceptions\Handler
which is where most of the specifics of how errors are handled can be found. In our application Handler
class, there are two key methods: report()
and render()
. As the documentation tells us, the render method determines how an exception should be shown to a user. This is important because we want to be able to display an error in a way that makes the most sense for the context in which it was encountered. An ajax call will most likely want a json response, whereas an http call will want an html response. A command line exception might want something completely different. The default render method most likely already does 99% of what you might want it to do.
For now, we are more interested in the report()
method. This method is specifically intended as a tool for customizing how we log errors. By default, Laravel disables the reporting of Http errors (such as 404) by including Symfony\Component\HttpKernel\Exception\HttpException::class
in the $dontReport
list. We could just comment out that line, but that won't actually help us accomplish our goal. When a 404 error is handed to the logger, you get something like this:
[2017-03-17 22:51:00] local.ERROR: Symfony\Component\HttpKernel\Exception\NotFoundHttpException in /home/vagrant/example.com/vendor/laravel/framework/src/Illuminate/Routing/RouteCollection.php:161
Stack trace:
#0 /home/vagrant/example.com/vendor/laravel/framework/src/Illuminate/Routing/Router.php(533): Illuminate\Routing\RouteCollection->match(Object(Illuminate\Http\Request))
... lots of other stuff ...
#14 /home/vagrant/example.com/public/index.php(53): Illuminate\Foundation\Http\Kernel->handle(Object(Illuminate\Http\Request))
Which doesn't actually tell us which URL caused the problem. Instead, let's update our custom report()
method to give us more details:
use Symfony\Component\HttpKernel\Exception\NotFoundHttpException;
//...
public function report(Exception $exception)
{
if ($exception instanceof NotFoundHttpException && $request = request()) {
Bugsnag::notifyError('404', 'Page Not Found', function ($report) use ($request) {
$report->setSeverity('info');
$report->setMetaData([
'url' => $request->url()
]);
});
}
parent::report($exception);
}
Here we check the exception type before handing it off to the parent class report() method. If it is an instance of NotFoundHttpException
and we have access to a valid request object, we log the url that has caused the problem. I am using Bugsnag in this example, but you could just as easily use another service or the default Monolog logger that Laravel provides. The main benefit to this method is that it allows us to log exactly the information we are looking for in whatever way we want to.
NotFoundHttpException
is a Symfony class that represents a 404 error. Laravel uses the Symfony HTTP handler behind the scenes. There are lots of features in Laravel that are provided by the underlying Symfony classes, and most of them are essentially undocumented - at least within the Laravel ecosystem. I highly recommend digging into the source code where you can to discover helpful gems and insights.
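The same type-check-then-enrich pattern can be sketched outside the framework as well. The exception class and helper below are illustrative stand-ins (not the real Symfony or Laravel classes), just to show the shape of the idea:

```php
<?php

// Illustrative stand-in for Symfony's NotFoundHttpException; not the real class.
class NotFoundHttpException extends RuntimeException {}

// Hypothetical helper: build a log message with extra context for 404s,
// mirroring the idea behind the customized report() method above.
function describeException(Exception $e, $url)
{
    if ($e instanceof NotFoundHttpException) {
        // For 404s, the URL is the detail we actually care about.
        return "404 Page Not Found at {$url}";
    }

    return get_class($e) . ': ' . $e->getMessage();
}
```

The key point is that the exception's type, not a string comparison, drives what extra metadata gets attached to the log entry.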
Being on 4.2, the application does not have access to the great testing conveniences found in Laravel 5.*, which unfortunately means none of the new test helpers were available to me. Codeception is a great alternative testing package that provides a wealth of functionality, and it is already compatible with Laravel 4 - this seemed like the best choice for us. So far, it has been working quite well for us, but I did run into one particularly frustrating problem: binding mocks of service classes to the IoC does not work "out of the box". This is frustrating if you have a service class that hits a third-party API and you don't want to hit that API each time you run your tests.
As it turns out, this problem is due to the Laravel4 Module re-initializing the Application object on each request. While it looks like this should work, it actually doesn't:
$I = new FunctionalTester($scenario);
$I->wantTo('upgrade a subscription plan');
$I->amActingAs('testuser@example.com'); // This is a custom helper method which sets the active user
// Establish the necessary mocks
$mockBillingManager = Mockery::mock('Acme\Billing\StripeBillingManager');
// Make sure to set the expectations of the mock object before binding it to the IoC
$I->getApplication()->bind('Acme\Billing\StripeBillingManager', $mockBillingManager);
$I->amOnRoute('billing.upgrade.form');
$I->submitForm('subscription-form', [
'plan' => 'premium',
'token' => csrf_token()
]);
$I->seeInDatabase('users', ['email' => 'testuser@example.com', 'plan' => 'premium']);
When the amOnRoute()
method is executed, the Application is refreshed and all of your custom bindings are lost in favor of the bindings you establish in your service provider(s). This is triggered by the doRequest()
method on the Codeception\Lib\Connector\Laravel4
class, which is used by default in the Laravel4 module. The solution that worked for me was to extend the Laravel4 module and its connector class. This allowed me to remove the offending line from the doRequest()
method. Also, the Laravel4 module does not provide any methods for helping with temporary binding; creating a custom module allowed me to add some convenient methods to help with this.
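A toy container makes the failure mode easy to see. This is not Codeception or Laravel code - `initialize()` here simply stands in for the connector re-initializing the Application on each request:

```php
<?php

// Toy service container, purely for illustration.
class ToyContainer
{
    private $bindings = [];

    public function bind($abstract, $concrete)
    {
        $this->bindings[$abstract] = $concrete;
    }

    public function make($abstract)
    {
        return isset($this->bindings[$abstract]) ? $this->bindings[$abstract] : null;
    }

    // Stands in for the connector's initialize() call: a fresh application
    // means every ad-hoc binding made by the test is wiped out.
    public function initialize()
    {
        $this->bindings = [];
    }
}

$app = new ToyContainer();
$app->bind('BillingManager', 'MockBillingManager');
$app->initialize();                      // what doRequest() effectively did
var_dump($app->make('BillingManager'));  // NULL - the mock binding is gone
```

Removing the `initialize()` call between requests is exactly what keeps the test's mock bindings alive.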
Jordan Eldredge has a great article about writing custom Codeception modules. Using the methods he describes in his post we can create a custom Codeception Module and a new Connector library which is identical to the existing Laravel4 connector library. Only one change is required to the existing connector library, on line 47:
// New class based on Codeception\Lib\Connector\Laravel4
class CustomLaravel4Connector {
// Most everything remains the same...
/**
* @param Request $request
* @return Response
*/
protected function doRequest($request)
{
//$this->initialize(); -- Remove this line
return $this->kernel->handle($request);
}
// Don't change anything else
}
Now we can create a custom module that extends the Laravel4 module and add some new functionality:
namespace Codeception\Module;
use CustomLaravel4Connector;
class CustomLaravel4 extends \Codeception\Module\Laravel4
{
/**
* Initialize hook.
*/
public function _initialize()
{
$this->checkStartFileExists();
$this->registerAutoloaders();
$this->revertErrorHandler();
$this->client = new CustomLaravel4Connector($this);
}
/**
* Allow the Codeception Actor to add a binding to the Laravel IOC
*
* @return void
*/
public function bindService($abstract, $instance, $shared = false)
{
$this->app->bind($abstract, $instance, $shared);
}
/**
* Allow the Codeception Actor to bind an instantiated object to the Laravel IOC
*
* @return void
*/
public function bindInstance($abstract, $instance)
{
$this->app->instance($abstract, $instance);
}
}
Presto! That is all there is to it. Now your mock objects can be bound appropriately and will remain in place for the duration of the test. Our new acceptance test looks like this:
$I = new FunctionalTester($scenario);
$I->wantTo('upgrade a subscription plan');
$I->amActingAs('testuser@example.com'); // This is a custom helper method which sets the active user
// Establish the necessary mocks
$I->bindService('Acme\Billing\BillingInterface', function(){
return Mockery::mock('Acme\Billing\StripeBillingManager');
});
$I->amOnRoute('billing.upgrade.form');
$I->submitForm('subscription-form', [
'plan' => 'premium',
'token' => csrf_token()
]);
$I->seeInDatabase('users', ['email' => 'testuser@example.com', 'plan' => 'premium']);
I have added my custom module and connector to my Laravel-Testing-Utilities package if you would like to make use of them in your own Laravel 4.2 application. Currently they are only on the 1.* branch, but if needed I will add them to the other versions of that package down the road.
First we should create a subdomain on the client's main site, specifically for this project: inbox.clientdomain.com
. This is where we will set up our Lumen application. Next we need to point the MX records for that subdomain to Mandrill. Within your Mandrill dashboard, go to the "Inbound" section and add your new domain to the domain list there. Once that is done, Mandrill will provide you with the necessary MX details, and you can also verify that your MX records have been set appropriately. Next, click on the "routes" button for your new domain. When email is received at inbox.clientdomain.com
we can assign the URL we want Mandrill to send it to. For this project I set up all mail received at "contact@inbox.clientdomain.com"
to be sent to the URL "http://inbox.clientdomain.com/contact"
. We could set up other email addresses on that domain to be sent to other places, but this is all we will need for now.
Here is the basic structure of what our micro-service will do:
First we need to install Lumen:
$ composer create-project laravel/lumen inbox --prefer-dist
This will install Lumen to the "inbox" folder in my projects directory. We are going to be using Facades and Eloquent in this project, and personally I prefer using the phpdotenv
config files, so we need to uncomment those lines in /bootstrap/app.php
to enable those features:
Dotenv::load(__DIR__.'/../');
// ...
$app->withFacades();
$app->withEloquent();
Now we need to pull in two additional packages via composer: illuminate/mail
and guzzlehttp/guzzle
:
$ composer require illuminate/mail
$ composer require guzzlehttp/guzzle
Lumen does not come with email functionality out of the box, so we are pulling that in here. Also, we need Guzzle to make use of the Mandrill API. Add a config file called /config/services.php
like so:
return array(
'mandrill' => [
'secret' => env('MANDRILL_API_KEY'),
],
);
Find the .env.example
file in your project's root folder and save it as ".env
". This is where we will keep our environmental configuration values. Add a line for your Mandrill API key:
MANDRILL_API_KEY=XXXXXXXXXXXXXX
You should also add a 32-character random app key while you are here, and make sure that your DB credentials are accurate.
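If you have PHP 7 (or the random_compat polyfill) available, one quick way to produce a suitable 32-character key is to hex-encode 16 random bytes. This helper is just a sketch, not part of Lumen:

```php
<?php

// Hypothetical helper: generate a 32-character hex string suitable for APP_KEY.
// random_bytes() requires PHP 7 (or the paragonie/random_compat polyfill).
function generateAppKey()
{
    return bin2hex(random_bytes(16)); // 16 bytes -> 32 hex characters
}

echo generateAppKey();
```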
If you want to store copies of your messages in your database, you should create a migration and set up that database now:
use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;

class CreateMessagesTable extends Migration
{
/**
* Run the migrations.
*
* @return void
*/
public function up()
{
Schema::create('messages', function (Blueprint $table) {
$table->increments('id');
$table->string('email');
$table->string('name');
$table->text('message');
$table->timestamps();
});
}
/**
* Reverse the migrations.
*
* @return void
*/
public function down()
{
Schema::drop('messages');
}
}
Now we are ready to set up our routing. In the file app/Http/routes.php
, add these routes:
$app->post('/contact', [
'uses' => 'App\Http\Controllers\ContactController@newMessage'
]);
$app->get('/', function () use ($app) {
return response()->json(['ok'], 200);
});
The first route receives the Mandrill POST data and sends it to the newMessage
method on our soon to be created ContactController. The second route is just a convenient way to ping the service and make sure it is running.
If this were a larger application, I would next suggest thinking about abstracting our code into a library for handling Mandrill Post Data, and then injecting that library into our ContactController. However, this is such a small service that I don't think we need to really worry about that. True, if we find that we want to handle multiple endpoints with this code, we will gain a lot by abstracting this code into a library that can be used wherever we need it (in keeping with the DRY spirit.) However, we are only concerned with one endpoint for now, so I am going to keep all of the logic within the newMessage
controller method.
Create a file called App/Http/Controllers/ContactController.php
with a method called newMessage
:
/**
* Handle a Mandrill Inbound API message
*
* @param Request $request
* @return Response
*/
public function newMessage(Request $request)
{
// Gather the POST data from Mandrill - if nothing is there, we can call it quits
$mandrillEvents = $request->input('mandrill_events', null);
if (!$mandrillEvents) {
return response()->json(['ok'], 200);
}
// Decode the WebHook data and get the text content of the email
$mail = json_decode($mandrillEvents);
$body = $mail[0]->msg->text;
// Extract the sender's email and the story from the body of the email
$senderEmail = $this->parseSenderEmail($body);
$senderName = $this->parseSenderName($body);
//Write the story to the DB
DB::table('messages')->insert([
'email' => $senderEmail,
'name' => $senderName,
'message' => $body,
]);
//Send a confirmation email to the submitter
$this->acknowledge($senderName, $senderEmail);
// Send a notification that a story was received
return $this->notify($body, $senderName);
}
First we gather the mandrill_events
POST data from the HTTP request. Note that Lumen does not provide an "Input::" facade; we need to get our input directly from the $request
object. Per the Mandrill Inbound API, this is a json encoded array of data representing the email message. The $mail
array represents the decoded message data.
This email is being sent to us via WordPress, so we have quite a bit of control over its format. We don't need to concern ourselves with the headers, or the sender's email address (in this case the sender is our WordPress installation.) All we need is the text from the body of the message, which we gather like so: $body = $mail[0]->msg->text;
. Now we have the text content of the message and we can do whatever we need to with it.
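In isolation, the decoding step looks like this. The payload below is a heavily simplified, illustrative stand-in for a real Mandrill event, which carries many more fields (headers, sender data, and so on):

```php
<?php

// Simplified, illustrative mandrill_events payload: a JSON array of events,
// each carrying a msg object. Real payloads include many more fields.
$mandrillEvents = '[{"msg":{"text":"Hello from the contact form"}}]';

$mail = json_decode($mandrillEvents);
$body = $mail[0]->msg->text;

echo $body; // Hello from the contact form
```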
In my case, I have set up the WordPress contact form to include the Sender's name and email address within the body of the message. To extract them, I created two private methods (parseSenderEmail()
and parseSenderName()
). The implementation of these methods will depend greatly on the format of the message - your specific implementations will be different from mine so I have not shown them here.
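For the sake of illustration only, here is one possible shape for those helpers. It assumes, hypothetically, that the WordPress form embeds "Name:" and "Email:" lines in the message body; your real format will almost certainly differ:

```php
<?php

// Hypothetical parsers - the "Name:" / "Email:" line format is assumed here
// purely for illustration and is not from the original application.
function parseSenderEmail($body)
{
    if (preg_match('/^Email:\s*(\S+)$/mi', $body, $matches)) {
        return $matches[1];
    }

    return '';
}

function parseSenderName($body)
{
    if (preg_match('/^Name:\s*(.+)$/mi', $body, $matches)) {
        return trim($matches[1]);
    }

    return '';
}
```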
Next we write a copy of the message to the database using the DB
facade, and then we send two emails: The first is the acknowledgement message sent to the person who contacted us in the first place, and the second is the notification sent to the client. Here are those two methods:
/**
* Send an acknowledgement email to the person who contacted us
*
* @param $senderName
* @param $senderEmail
*/
private function acknowledge($senderName, $senderEmail)
{
if (filter_var($senderEmail, FILTER_VALIDATE_EMAIL)) {
Mail::send('emails.thankyou', [], function($message) use ($senderName, $senderEmail) {
$message->from('client@clientdomain.com', 'Client Name');
$message->subject('Thank you for contacting us');
$message->to($senderEmail, $senderName);
});
}
}
/**
* Send a notification to 'contact@example.com`
* @param $body
* @return
*/
private function notify($body, $senderName)
{
Mail::send('emails.notify', ['body' => nl2br($body)], function($message) use ($senderName) {
$message->from('client@clientdomain.com', 'Client Name');
$message->subject('A New Contact Request from ' . $senderName);
$message->to('client@clientdomain.com', 'Client Name');
});
return response()->json(['ok'], 200);
}
In the acknowledge()
method we are first making sure that the email address we were provided with is valid. If it is, we send an email to that address with some boilerplate text, which comes from the /resources/views/emails/thankyou.blade.php
file.
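The filter_var() check works because FILTER_VALIDATE_EMAIL returns the validated email string on success and false on failure:

```php
<?php

// FILTER_VALIDATE_EMAIL returns the email string when valid, false otherwise.
var_dump(filter_var('user@example.com', FILTER_VALIDATE_EMAIL)); // string(16) "user@example.com"
var_dump(filter_var('not-an-email', FILTER_VALIDATE_EMAIL));     // bool(false)
```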
In the second method, we are essentially forwarding the content of the message to the client directly. We pass the $body
content to a blade file called /resources/views/emails/notify.blade.php
, which looks like this:
<!DOCTYPE html>
<html lang="en-US">
<head>
<meta charset="utf-8">
</head>
<body>
{!! $body !!}
</body>
</html>
Note the use of nl2br()
to keep the basic formatting in place. We read the text version of the message, which uses newline characters that will be lost when we show the text in html. Using nl2br()
to convert the newlines to "<br />" tags maintains the basic message format that we are expecting.
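A quick demonstration of what nl2br() does to the text body:

```php
<?php

$body = "Line one\nLine two";

// nl2br() inserts a <br /> tag before each newline; the newline itself is kept.
echo nl2br($body);
// Line one<br />
// Line two
```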
That is really all there is to it. Deploy the code to the client's server and you should be good to go.
Two important things before we begin. First, we have to remove this line from our boot()
method, because it is no longer needed:
$this->package('rydurham/sentinel');
Secondly, Laravel 5 now ships with a vendor:publish
command, which greatly simplifies copying files from the vendor/package directory into the main application. Any folders or files added to the $this->publishes
array in the boot()
method will be published when you run the vendor:publish
command. (Vendor in this case is literally 'vendor', not the vendor name of your package.) As per the documentation, add this to your boot()
command:
public function boot()
{
// ...
$this->publishes([
__DIR__.'/path/to/file1.php' => base_path('location/in/main/application/file1.php'),
__DIR__.'/path/to/directory' => base_path('location/in/main/application/directory'),
// Add as many items as you want, pointing to any location.
// Works with both files and directories
]);
// ...
}
Configuration management in Laravel 5 has been greatly simplified, and there is no longer a need for accessing config options via a namespace. Add your package config file to the $this->publishes
array and publish it to the main application config folder: base_path('config/sentinel.php')
. You can name it whatever you want, but I would recommend giving it the same name as your package.
Accessing config values can be done in the same way you are used to:
Config::get('sentinel.allow_usernames')
or you can use this new helper function:
config('sentinel.allow_usernames')
It is possible that someone using your package may not publish the config file, or they only have a subset of the configurable values in their local version of the config file. In those situations it can be beneficial to selectively merge their config file with the default config file in the package repository. This can be done with the mergeConfigFrom
method in the boot
function:
public function boot()
{
// ...
$this->mergeConfigFrom(__DIR__.'/../config/sentinel.php', 'sentinel');
// ...
}
Package views can still be accessed via a namespace. In lieu of the old view:publish
command, you should add your views to the publishes
array. The loadViewsFrom
function allows you to register the view namespace; however, you may want to check to see if the views have been published before you create the namespace:
public function boot()
{
// ...
// Establish Views Namespace
if (is_dir(base_path() . '/resources/views/packages/rydurham/sentinel')) {
// The package views have been published - use those views.
$this->loadViewsFrom(base_path() . '/resources/views/packages/rydurham/sentinel', 'Sentinel');
} else {
// The package views have not been published. Use the defaults.
$this->loadViewsFrom(__DIR__ . '/../views/bootstrap', 'Sentinel');
}
// ...
}
Assets like javascript files or images should be added to the publishes
array and pointed to the public folder.
Making translation files available to the application is very straightforward, and translation strings can still be accessed with the trans()
function using a namespace:
public function boot()
{
// ...
// Establish Translator Namespace
$this->loadTranslationsFrom(__DIR__ . '/../lang', 'Sentinel');
// ...
}
Some packages might make use of their own controllers and routing. To add routes, just include the routes.php file as such:
public function boot()
{
// ...
// Add Sentinel Routes
include __DIR__ . '/../routes.php';
// ...
}
Controllers should be namespaced, and autoloaded via the composer.json file:
"autoload": {
"classmap": [
"src/seeds",
"src/controllers"
],
"psr-4": {
"Sentinel\\": "src/Sentinel"
}
},
A few years ago I was working for a startup in the healthcare industry - their idea was to provide an easy way for customers to receive updates about their loved ones in extended care, through summaries prepared in layman's English by trained nurses. Clients would place an order and the company would acquire copies of the medical records in question and have staff nurses write summaries that would then be delivered to the client through the website.
The meat of the application was tracking record requests as they progressed through the various stages of preparation - acquisition from the hospitals, summarization by nurses, review by supervisors, and finally release to clients for consumption. As you can imagine, each phase of a record request required that different actions be available on the record object. Not only that, but the types of actions available on a record also depended on the role of the user - admins had to do certain things with records, nurses certain other things, and of course the customer also had a different set of actions they could take against a record. Knowing what stage a particular record request was in became very important, as you can imagine.
Initially we decided to tackle this by using a 'status' column on the record table. "New" for new records, "acquired" for records that had been delivered from medical institutions, 'summarized' for records that had been addressed by the staff nurses and 'approved' for records that had been approved for delivery to the customer. (It was a bit more complicated than that, but you get the idea.) This worked well for the most part, however anytime we needed to present options available for the record we wound up with a lot of code like this:
<?php foreach ($records as $record): ?>
<tr>
<td><?php echo $record->referenceId; ?></td>
<td>
<?php if ($record->status == 'new'): ?>
<a href="...">Cancel</a>
<a href="...">Assign to Nurse</a>
<?php endif; ?>
<?php if ($record->status == 'acquired'): ?>
<a href="...">Prepare Summary</a>
<a href="...">View PDFs</a>
<?php endif; ?>
<?php if ($record->status == 'summarized'): ?>
<a href="...">Review Summary</a>
<a href="...">Approve</a>
<a href="...">Reject</a>
<?php endif; ?>
<?php if ($record->status == 'approved'): ?>
<a href="...">View</a>
<a href="...">Process Payment</a>
<?php endif; ?>
</td>
</tr>
<?php endforeach; ?>
This is a simplification, but the basic idea remains the same: A lot of time and energy is being spent checking the status of the record, and this doesn't even account for user access control. More often than not we had situations where we were doing things like if ($record->status == 'acquired' && $currentUser->inGroup('nurses'))
- it worked, but not that well and it was a bit of a mess. As it happens, this is a perfect scenario for the use of STI. What if, instead of checking for each possible type of action on a given record, we could instead ask the record itself what actions are available to it?
To implement STI, one column on the table is designated as the "discriminator" column. This column will be used to determine what type of entity should be returned for each row on the table. In our example, the 'status' column is the discriminator column. Properly configured, your ORM should return the appropriate type of object based on the value of that column. We can then set up child classes that have different methods available to them depending on what actions we want to make available to that type of object.
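Stripped of any ORM, the discriminator idea reduces to a map from column values to class names. The classes and statuses below are illustrative, not from the original application:

```php
<?php

// Framework-free sketch of discriminator-based instantiation.
abstract class BaseRecord {}
class NewRecord extends BaseRecord {}
class AcquiredRecord extends BaseRecord {}

function resolveRecord(array $row)
{
    // The 'status' column is the discriminator: its value selects the class.
    $inheritanceMap = [
        'new'      => 'NewRecord',
        'acquired' => 'AcquiredRecord',
    ];

    if (!array_key_exists($row['status'], $inheritanceMap)) {
        throw new InvalidArgumentException("Unknown status: {$row['status']}");
    }

    $class = $inheritanceMap[$row['status']];

    return new $class();
}
```

An ORM doing STI performs essentially this lookup every time it hydrates a row.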
STI is ideal for situations where you are storing objects that are of very similar types - ideally sharing the need for all the other columns in that table. If you are working with STI and find that you are adding columns just for the sake of certain types of entities, that is a red flag - there is probably a better way to organize your data. In our case, we are looking to return different types of record objects. Imagine, however, that you have a table called "vehicles" and you are storing bicycles, cars and airplanes within it. It is likely that you would want a column that stored the maximum lifting weight allowed for each airplane, or the number of crew members needed to operate it - those columns would be of no use to the Bicycle objects. That is a situation where STI probably won't be very helpful.
Eloquent is an implementation of the Active Record pattern, which places heavy emphasis on using one table for each type of resource you are working with. There is no support for STI out of the box, however we can implement it on our own by overwriting a few key methods in the Illuminate\Database\Eloquent\Model
class.
One of the most important methods in an Eloquent Model object is the newFromBuilder
function. This method is responsible for instantiating the object in question and using the data from the database to populate the object's member data. This function is called any time you make an Eloquent call that returns either a single object or a collection of objects. Modifying this function will be our first step - here is our new version:
/**
* Create a new model instance requested by the builder.
*
* @param array $attributes
* @return \Illuminate\Database\Eloquent\Model
*/
public function newFromBuilder($attributes = array())
{
// Create a new instance of the Entity Type Class
$m = $this->mapData((array)$attributes)->newInstance(array(), true);
// Hydrate the new instance with the table data
$m->setRawAttributes((array)$attributes, true);
// Return the assembled object
return $m;
}
Instead of creating a new instance of the model object (via $instance = $this->newInstance(array(), true);
) we are instead passing the attributes through to a new function called mapData
which is responsible for resolving the STI entity type and instantiating it, as such:
/**
* Use the inheritance map to determine the appropriate object type for a given Eloquent object
*
* @param array $attributes
* @return mixed
*/
public function mapData(array $attributes)
{
// Determine the type of entity specified by the discriminator column
$entityType = isset($attributes[$this->discriminatorColumn]) ? $attributes[$this->discriminatorColumn] : null;
// Throw an exception if this entity type is not in the inheritance map
if (!array_key_exists($entityType, $this->inheritanceMap)) {
throw new ModelNotFoundException("No class is defined in the inheritance map for entity type '{$entityType}'");
}
// Get the appropriate class name from the inheritance map
$class = $this->inheritanceMap[$entityType];
// Return a new instance of the specified class
return new $class;
}
Let's step through this function line by line: First we check to make sure that a $this->discriminatorColumn
value exists. If it does, we find the value of that column in the attributes array containing the row data from the database. This is our $entityType
.
We are using an $inheritanceMap
as an array that maps discriminator values to entity classes. We now need to make sure that the entity class defined by the discriminator column actually exists - if it doesn't then we throw an exception.
After that, we grab the class name for this $entityType
from the $inheritanceMap
array and return an new instance of that class back to the newFromBuilder
function, which then populates the entity with the data from the database.
We now need to configure our Eloquent Model object appropriately:
class Widget extends Illuminate\Database\Eloquent\Model
{
// Eloquent Configuration
protected $guarded = ['id'];
protected $fillable = ['name', 'description', 'status'];
// Single Table Inheritance Configuration
use SingleTableInheritanceTrait;
protected $table = 'widgets';
protected $morphClass = 'Epiphyte\Widget';
protected $discriminatorColumn = 'status';
protected $inheritanceMap = [
'new' => 'Epiphyte\Entities\Widgets\NewWidget',
'processed' => 'Epiphyte\Entities\Widgets\ProcessedWidget',
'complete' => 'Epiphyte\Entities\Widgets\CompleteWidget'
];
// ...
}
This is where we select the discriminator column and set up the inheritance map. Our intention is to have the child entity classes inherit from this Eloquent model. If we don't explicitly set the $table
and $morphClass
here, the relationships we establish on the base model will not work properly with the child entities.
There are some additional modifications that need to be made to allow Eloquent to play nicely with the new object types - if you want more information about those techniques I recommend you read this article by Pallav Kaushish - he does a great job of explaining the mechanics of the additional changes needed.
If you want to jump in the deep-end and get started with STI in Eloquent right away, I have a package that provides a trait with everything you need. Add the trait to your Eloquent Model objects, along with the additional configuration settings, and you are good to go. More information about that package can be found here.
Those of you are used to working with Data Mappers, such as Doctrine, will find that Single Table Inheritance is not much of a mental leap from what you are used to. The Doctrine documentation for Table Inheritance has a lot of great information - it is very easy to get up and running with STI in very little time. It can be as simple as doing something like this, which I have taken directly from the docs:
/**
* @Entity
* @InheritanceType("SINGLE_TABLE")
* @DiscriminatorColumn(name="discr", type="string")
* @DiscriminatorMap({"person" = "Person", "employee" = "Employee"})
*/
class Person
{
// ...
}
/**
* @Entity
*/
class Employee extends Person
{
// ...
}
We can now create different classes that represent each possible status type for our medical record requests. Now, instead of checking different status levels within our views to display action links, we can instead ask our entity objects to tell us what actions are available to them. Imagine we were to add something like this to our base model:
public function getActions($userLevel)
{
return $this->actions[$userLevel];
}
and then, in our child entities, we were to add member arrays that specify the allowable actions for an object with that status:
class RecordNew extends Epiphyte\Record
{
// ..
protected $actions = [
'client' => [
[
'name' => 'edit',
'action' => 'ClientReportController@edit',
'class' => 'default'
],
[
'name' => 'delete',
'action' => 'ClientReportController@delete',
'class' => 'danger'
]
]
];
// ..
}
We can set up some convenience functions behind the scenes that convert the action array entries into html button links, and then in our views we can convert our old messy code to something like this:
<?php foreach ($records as $record): ?>
<tr>
<td>
<?php echo $record->referenceNum; ?>
</td>
<td>
<?php echo $record->getActions('client'); ?>
</td>
</tr>
<?php endforeach; ?>
Much simpler, don't you think?
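One possible shape for such a convenience function, sketched without the framework. In the real application the URL would be resolved from the 'action' controller reference (e.g. via route helpers); here it is passed in directly, and the helper name is my own invention:

```php
<?php

// Hypothetical helper: render one entry from an entity's $actions array as an
// HTML button link. The URL is supplied by the caller for illustration.
function renderAction(array $action, $url)
{
    return sprintf(
        '<a href="%s" class="btn btn-%s">%s</a>',
        htmlspecialchars($url),
        htmlspecialchars($action['class']),
        htmlspecialchars(ucfirst($action['name']))
    );
}

echo renderAction(['name' => 'edit', 'class' => 'default'], '/records/1/edit');
```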
Here are two excellent primers on the changes to be found in Homestead 2.0:
The primary reason I enjoyed using Homestead/Vagrant in the past was that I could keep my host machine free of the clutter that comes with developing websites, and I didn't have to worry about installing software in Windows that has primarily been designed to run on linux - what a headache!
However, Homestead 2.0 requires Composer to be running on your host machine before you can use it. To make this upgrade I had to install PHP and Composer locally, which turned out to be slightly more tricky than expected.
Conveniently it is easy to use PHP on Windows without installing a complete web stack:
1. Download PHP and extract it to C:\php, then add that folder to your system path.
2. Rename the php.ini-development file to php.ini. There are some recommended edits you should make, but the most important thing to do is enable the OpenSSL extension.
3. You should now be able to run php -v and see something like this:
PHP 5.6.3 (cli) (built: Nov 12 2014 17:18:08)
Copyright (c) 1997-2014 The PHP Group
Zend Engine v2.6.0, Copyright (c) 1998-2014 Zend Technologies
Composer has a Windows installer which is very easy to use. It will ask you to provide the location of your PHP installation directory. After that it will scan your PHP setup for problems and then install the Composer.phar file. If your OpenSSL extension is not enabled you will see an error message. When complete, you should be able to run composer --version
and see something like the following:
Composer version 1.0-dev (ffffab37a294f3383c812d0329623f0a4ba45387) 2014-11-05 06:04:18
It is important to note that Composer will be installed in your AppData folder: C:\users\{user-name}\AppData\Roaming\Composer
At this point you should add Composer's vendor\bin
folder to your path as well - this will be necessary to work with Homestead later.
C:\users\{user-name}\AppData\Roaming\Composer\vendor\bin
You can now follow the instructions provided in the Homestead documentation for installation. If you haven't already, you need to download the Homestead Vagrant box:
> vagrant box add laravel/homestead
After that, run
> composer global require "laravel/homestead=~2.0"
This will copy the Homestead repo into your global Composer vendor folder.
Now run homestead init
. This will create a .homestead
folder in your user directory:
C:\users\{user-name}\.homestead
This is where you will find your homestead.yaml
file, which you can configure as needed. Make sure you add your SSH Key - this is critically important. When everything is ready you can run homestead up
to boot and provision Homestead.
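For reference, a trimmed homestead.yaml might look something like the following. The IP address, folder paths, and site name below are illustrative placeholders, not values prescribed by this article; adjust them to your own setup:

```yaml
ip: "192.168.10.10"
memory: 2048
cpus: 1

# Path to the public key that will be copied into the VM (critically important, as noted above)
authorize: ~/.ssh/id_rsa.pub

keys:
    - ~/.ssh/id_rsa

# Shared folders: host path mapped into the VM
folders:
    - map: C:/projects
      to: /home/vagrant/projects

# Nginx sites: hostname mapped to a document root inside the VM
sites:
    - map: homestead.app
      to: /home/vagrant/projects/my-app/public
```

Remember that if you later add a new site here, you will need to re-provision the box for the change to take effect.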
Out of the box, Homestead has several commands that you will use day-to-day. These are just wrappers around Vagrant commands that you may already be familiar with. When you are starting out your day you will boot the box using homestead up, and then you can access the machine using homestead ssh.
Currently there is no homestead wrapper for the "--provision" option, so if you add a new site to your yaml file and want to re-provision Homestead you will need to run vagrant up --provision
from within the homestead installation folder.
C:\users\{user-name}\AppData\Roaming\Composer\vendor\laravel\homestead
There you can also find the Vagrantfile
if you want to make changes to how the box is provisioned.
As of this writing, I have found that using homestead ssh
to log into the Homestead machine creates an SSH session which freezes up on me after a few minutes of use. For now I am just using the regular vagrant ssh
command, which does not have the same problem.
Happy homesteading!
I ran into this issue when working on my Laravel/Sentry 2 bridge package Sentinel. (It is not related to Cartalyst's Sentinel package.) This package uses a validation system inspired by the book Implementing Laravel by Chris Fidao, and in several locations makes use of custom error messages.
When I switched to using language strings instead of straight text I wasn't quite sure how to pull the custom error messages from the language files and pass them to the Validator. Here is what I landed on:
First we need to make sure that the Package's namespace is passed to the Translator. Add this line to the Service Provider's boot function:
// Package Service Provider
class SentinelServiceProvider extends ServiceProvider
{
// ...
public function boot()
{
// ...
// Add the Translator Namespace
$this->app['translator']->addNamespace('Sentinel', __DIR__.'/../lang');
}
}
The addNamespace
function informs Laravel's translator class of the location of our language files and allows for the use of the 'Sentinel' namespace when translating language strings. We can now reference the Sentinel Package language strings via the Illuminate\Translation\Translator
class:
echo Lang::get('Sentinel::users.noaccess')
// Result:
// You are not allowed to do that.
We can also use trans()
, which is a helper function that aliases Lang::get()
.
echo trans('Sentinel::users.noaccess')
// Result:
// You are not allowed to do that.
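For context, the users.noaccess string above lives in a language file inside the package's lang directory. A minimal sketch of such a file follows; the exact path is an assumption based on the lang directory registered in the service provider above, and the message text mirrors the example output:

```php
<?php

// Hypothetical location: src/lang/en/users.php
// Returned keys become available as Sentinel::users.{key}
return [
    'noaccess' => 'You are not allowed to do that.',
];
```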
Now we need to feed our custom error message strings to the Validator. This may be different for you, depending on how you handle validation, but the basic idea should remain the same.
<?php
namespace Sentinel\Service\Validation;
use Illuminate\Validation\Factory;
abstract class AbstractLaravelValidator implements ValidableInterface
{
/**
* Validator
*
* @var \Illuminate\Validation\Factory
*/
protected $validator;
/**
* Custom Validation Messages
*
* @var array
*/
protected $messages = array();
public function __construct(Factory $validator)
{
$this->validator = $validator;
// Retrieve Custom Validation Messages & Pass them to the validator.
$this->messages = array_dot(trans('Sentinel::validation.custom'));
}
// ...
}
Here we are establishing a $messages
class member and loading our custom language strings when the abstract validator class is instantiated. The language files use Sentinel::validation.custom
to refer to the array of custom error message strings.
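The array_dot() call above flattens the nested message array into dot-notation keys, which is the shape the Validator expects for custom messages. Here is a quick self-contained sketch of that behavior; the helper below is a minimal stand-in for Laravel's array_dot() so the example runs outside the framework, and the sample messages are hypothetical:

```php
<?php

// Minimal stand-in for Laravel's array_dot() helper, which flattens a
// nested array into a single level using dot-notation keys.
function array_dot(array $array, string $prepend = ''): array
{
    $results = [];
    foreach ($array as $key => $value) {
        if (is_array($value) && !empty($value)) {
            // Recurse into nested arrays, accumulating the key path
            $results = array_merge($results, array_dot($value, $prepend.$key.'.'));
        } else {
            $results[$prepend.$key] = $value;
        }
    }
    return $results;
}

// Hypothetical nested messages, shaped like what
// trans('Sentinel::validation.custom') might return:
$custom = [
    'email' => [
        'required' => 'We need your e-mail address.',
    ],
];

$flat = array_dot($custom);
// $flat === ['email.required' => 'We need your e-mail address.']
```

The flattened 'field.rule' keys are exactly what the Validator matches against when deciding which custom message to show.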
Now all that remains is to pass the messages to the validator when we are attempting to validate data:
abstract class AbstractLaravelValidator implements ValidableInterface
{
// ...
/**
* Validation passes or fails
*
* @return boolean
*/
public function passes()
{
$validator = $this->validator->make($this->data, $this->rules, $this->messages);
if ($validator->fails())
{
$this->errors = $validator->messages();
return false;
}
return true;
}
}
The validator's make method takes three arguments: the input data, the validation rules, and the array of custom messages. The messages array uses the input field and rule names as array keys, and the corresponding value is the desired message text.
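To close the loop, the custom messages that end up in $this->messages originate from the package's validation language file. A hypothetical sketch of that file follows; the field, rule, and message text are invented for illustration, but the 'custom' key matches the Sentinel::validation.custom lookup used earlier:

```php
<?php

// Hypothetical location: src/lang/en/validation.php
// The nested field => rule => message structure is flattened by
// array_dot() into keys like 'email.required'.
return [
    'custom' => [
        'email' => [
            'required' => 'We need your e-mail address!',
        ],
    ],
];
```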
Problem solved!