Links recognition library with FULL unicode support. Focused on high-quality link pattern detection in plain text.
Why it's awesome:
- Full unicode support, with astral characters!
- International domains support.
- Allows rules extension & custom normalizers.
```bash
npm install linkify-it --save
```
Browserification is also supported.
```js
var linkify = require('linkify-it')();

// Reload full tlds list & add unofficial `.onion` domain.
linkify.tlds(require('tlds')).tlds('onion', true);

// Add `git:` protocol as "alias"
linkify.add('git:', 'http:');

// Add twitter mentions handler
linkify.add('@', {
  validate: function (text, pos, self) {
    var tail = text.slice(pos);

    if (!self.re.twitter) {
      self.re.twitter = new RegExp(
        '^([a-zA-Z0-9_]){1,15}(?!_)(?=$|' + self.re.src_ZPCc + ')'
      );
    }
    if (self.re.twitter.test(tail)) {
      // Linkifier allows punctuation chars before the prefix,
      // but we additionally disable `@` ("@@mention" is invalid).
      if (pos >= 2 && text[pos - 2] === '@') {
        return false;
      }
      return tail.match(self.re.twitter)[0].length;
    }
    return 0;
  },
  normalize: function (m) {
    m.url = 'https://twitter.com/' + m.url.replace(/^@/, '');
  }
});

// Do the job:
linkify.test("Search at google.com.");
linkify.match("Search at google.com.");
```
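For reference, here is roughly what those last two calls return with the rules configured above. The field values are an assumption based on the default fuzzy-link normalizer, which prefixes `http://` to schema-less matches:

```js
linkify.test("Search at google.com.");  // true

linkify.match("Search at google.com.");
// [ {
//   schema: "",        // empty for fuzzy links
//   index: 10,         // offset of "google.com"
//   lastIndex: 20,     // index of the char right after the match
//   raw: "google.com",
//   text: "google.com",
//   url: "http://google.com"
// } ]
```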
The `LinkifyIt` constructor creates a new linkifier instance with optional additional schemas. It can be called without the `new` keyword for convenience.
By default it understands:

- `http(s)://...`, `ftp://...`, `mailto:...` & `//...` links
- "fuzzy" links and emails (google.com, foo@bar.com).
`schemas` is an object, where each key/value describes protocol/rule:

- key - link prefix (usually, a protocol name with `:` at the end, `skype:` for example). `linkify-it` makes sure that the prefix is not preceded by an alphanumeric char.
- value - rule to check the tail after the link prefix:
  - String - just an alias to an existing rule.
  - Object:
    - validate - validator function (should return matched length on success), or a `RegExp`.
    - normalize - optional function to normalize the text & url of a matched result (for example, for twitter mentions).
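A sketch of what such a `schemas` object could look like. The `tel:` rule and its pattern are made up for illustration; only the shape of the definitions follows the description above:

```js
var LinkifyIt = require('linkify-it');

var linkify = new LinkifyIt({
  // String value: alias to an existing rule.
  'git:': 'http:',

  // Object value: a RegExp validator for the tail after the prefix
  // (a function returning the matched length would also work).
  'tel:': {
    validate: /^[0-9+()-]{5,20}/
  }
});

linkify.test('Call tel:+1-202-555-0136 today'); // true
```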
`.test(text)` searches for a linkifiable pattern and returns `true` on success or `false` on fail.
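For example (a small sketch using the default rules; the sample strings are made up):

```js
var linkify = require('linkify-it')();

linkify.test('Visit github.com for the code'); // true
linkify.test('No links in this sentence');     // false
```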
`.testSchemaAt(text, schemaName, pos)` is similar to `.test()` but checks only the specified protocol tail exactly at the given position. Returns the length of the found pattern (0 on fail).
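A small sketch; here the position is assumed to be the index right after the schema prefix, which is how the built-in matcher feeds positions to validators:

```js
var linkify = require('linkify-it')();

var text = 'http://google.com';

// Check the 'http:' tail starting right after the prefix (index 5).
linkify.testSchemaAt(text, 'http:', 5);   // 12 -- length of '//google.com'

// A schema that does not match at that position returns 0.
linkify.testSchemaAt(text, 'mailto:', 5); // 0
```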
`.match(text)` returns an `Array` of found link matches, or `null` if nothing is found.

Each match has:

- schema - link schema; can be empty for fuzzy links, or `//` for protocol-neutral links.
- index - offset of matched text.
- lastIndex - index of the next char after the match end.
- raw - matched text.
- text - normalized text.
- url - link, generated from the matched text.
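A sketch of those fields for two different kinds of matches; the values shown are what the default rules are expected to produce for this made-up input:

```js
var linkify = require('linkify-it')();

linkify.match('Docs: //example.com and user@example.com');
// [
//   { schema: '//',      index: 6,  lastIndex: 19,
//     raw: '//example.com',    text: '//example.com',
//     url: '//example.com' },
//   { schema: 'mailto:', index: 24, lastIndex: 40,
//     raw: 'user@example.com', text: 'user@example.com',
//     url: 'mailto:user@example.com' }
// ]
```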
`.tlds(list [, keepOld])` loads (or merges) a new tlds list. These are used for fuzzy links (without a prefix) to avoid false positives. By default this algorithm is used:

- hostnames with any 2-letter root zone are ok;
- biz|com|edu|gov|net|org|pro|web|xxx|aero|asia|coop|info|museum|name|shop|рф are ok;
- encoded (`xn--...`) root zones are ok.

If the list is replaced, then an exact match for 2-char root zones will be checked.
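For example, replacing the list with a full one from the `tlds` package, or merging a single zone into the current list (the second argument keeps the old entries, as in the usage example above):

```js
var linkify = require('linkify-it')();

// Replace the built-in list with a full tlds list...
linkify.tlds(require('tlds'));

// ...or merge an extra zone into the current list.
linkify.tlds('onion', true);

linkify.test('Hidden service at example.onion'); // true
```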
`.add(schema, definition)` adds a new rule with a `schema` prefix. For definition details see the constructor description. To disable an existing rule, use `.add(name, null)`.
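For instance, a minimal sketch that disables one of the default rules and registers a simple alias (the `svn:` alias is made up for illustration):

```js
var linkify = require('linkify-it')();

// Turn off the default `ftp:` rule.
linkify.add('ftp:', null);
linkify.test('ftp://example.com'); // false

// Register a new prefix as an alias to the existing `http:` rule.
linkify.add('svn:', 'http:');
linkify.test('svn://example.com'); // true
```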