Links recognition library with FULL Unicode support. Focused on high-quality link pattern detection in plain text.
Why it's awesome:
- Full unicode support, with astral characters!
- International domains support.
- Allows rules extension & custom normalizers.
```bash
npm install linkify-it --save
```
Browserification is also supported.
```js
var linkify = require('linkify-it')();

// Reload full tlds list & add unofficial `.onion` domain.
linkify
  .tlds(require('tlds'))   // Reload with full tlds list
  .tlds('onion', true)     // Add unofficial `.onion` domain
  .add('git:', 'http:')    // Add `git:` protocol as "alias"
  .add('ftp:', null)       // Disable `ftp:` protocol
  .set({ fuzzyIP: true }); // Enable IPs in fuzzy links (without schema)
```
```js
console.log(linkify.test('Site github.com!')); // true

console.log(linkify.match('Site github.com!')); // [ {
                                                //   schema: "",
                                                //   index: 5,
                                                //   lastIndex: 15,
                                                //   raw: "github.com",
                                                //   text: "github.com",
                                                //   url: "http://github.com",
                                                // } ]
```
```js
linkify.add('@', {
  validate: function (text, pos, self) {
    var tail = text.slice(pos);

    if (!self.re.twitter) {
      self.re.twitter = new RegExp(
        '^([a-zA-Z0-9_]){1,15}(?!_)(?=$|' + self.re.src_ZPCc + ')'
      );
    }
    if (self.re.twitter.test(tail)) {
      // Linkifier allows punctuation chars before prefix,
      // but we additionally disable `@` ("@@mention" is invalid)
      if (pos >= 2 && text[pos - 2] === '@') {
        return false;
      }
      return tail.match(self.re.twitter)[0].length;
    }
    return 0;
  },
  normalize: function (match) {
    match.url = 'https://twitter.com/' + match.url.replace(/^@/, '');
  }
});
```
### new LinkifyIt(schemas, options)

Creates a new linkifier instance with optional additional schemas.
Can be called without the `new` keyword for convenience.
By default it understands:

- `http(s)://...`, `ftp://...`, `mailto:...` & `//...` links
- "fuzzy" links and emails (google.com, foo@bar.com).
`schemas` is an object, where each key/value pair describes a protocol/rule:

- **key** - link prefix (usually a protocol name with `:` at the end, `skype:`
  for example). linkify-it makes sure that the prefix is not preceded by an
  alphanumeric char.
- **value** - rule to check the tail after the link prefix
  - *String* - just an alias to an existing rule
  - *Object*
    - *validate* - validator function (should return matched length on success),
      or a `RegExp`.
    - *normalize* - optional function to normalize the text & url of the matched
      result (for example, for twitter mentions).
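A schemas object following that description might look like this. A hedged sketch: the `tg:` prefix, its regexp, and the normalizer body are illustrative assumptions, not rules shipped with the library:

```javascript
// Hypothetical `schemas` argument: one alias rule and one object rule.
var schemas = {
  // String value: reuse the tail rules of the existing `http:` schema.
  'skype:': 'http:',
  // Object value: custom validator (RegExp form) plus a normalizer.
  'tg:': {
    validate: /^\/\/[a-z0-9_]{5,32}/i, // tail after the `tg:` prefix
    normalize: function (match) {
      // e.g. display "tg://durov" as "@durov"
      match.text = match.text.replace(/^tg:\/\//, '@');
    }
  }
};

console.log(Object.keys(schemas)); // [ 'skype:', 'tg:' ]
```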
`options`:

- **fuzzyLink** - recognize URLs without the `http(s):` prefix. Default `true`.
- **fuzzyIP** - allow IPs in fuzzy links above. Can conflict with some texts
  like version numbers. Default `false`.
- **fuzzyEmail** - recognize emails without the `mailto:` prefix.
#### .test(text)

Searches for a linkifiable pattern and returns `true` on success or `false` on fail.
#### .pretest(text)

Quick check whether a link *may* exist in the text. Can be used to optimize more
expensive `.test()` calls. Returns `false` if a link cannot be found, `true` if a
`.test()` call is needed to know for sure.
#### .testSchemaAt(text, name, position)

Similar to `.test()` but checks only the tail of a specific protocol, exactly at
the given position. Returns the length of the found pattern (0 on fail).
#### .match(text)

Returns an `Array` of found link matches or `null` if nothing is found.

Each match has:

- **schema** - link schema; can be empty for fuzzy links, or `//` for
  protocol-neutral links.
- **index** - offset of the matched text
- **lastIndex** - index of the next char after the match end
- **raw** - matched text
- **text** - normalized text
- **url** - link, generated from the matched text
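The `index`/`lastIndex`/`url`/`text` fields are enough to rebuild the input with anchors. A minimal sketch: the `linkifyText` helper is not part of the library, and the hard-coded match object mirrors the `github.com` example above:

```javascript
// Replace each matched span with an <a> tag, using the match fields
// described above (index, lastIndex, url, text).
function linkifyText(text, matches) {
  if (!matches) return text;
  var out = '';
  var last = 0;
  matches.forEach(function (m) {
    out += text.slice(last, m.index);                    // text before match
    out += '<a href="' + m.url + '">' + m.text + '</a>'; // the link itself
    last = m.lastIndex;                                  // continue after it
  });
  return out + text.slice(last);                         // trailing text
}

// Hard-coded match object, mirroring linkify.match('Site github.com!').
var matches = [{
  schema: '', index: 5, lastIndex: 15,
  raw: 'github.com', text: 'github.com', url: 'http://github.com'
}];

console.log(linkifyText('Site github.com!', matches));
// Site <a href="http://github.com">github.com</a>!
```

Note that matches are returned in order, so a single left-to-right pass over the string is sufficient.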
#### .tlds(list [, keepOld])

Load (or merge) a new tlds list. These are used for fuzzy links (without a
prefix) to avoid false positives. By default this algorithm is used:

- hostnames with any 2-letter root zone are ok.
- biz|com|edu|gov|net|org|pro|web|xxx|aero|asia|coop|info|museum|name|shop|рф are ok.
- encoded (`xn--...`) root zones are ok.

If the list is replaced, then an exact match for 2-char root zones will be checked.
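The default acceptance rules above can be sketched as a standalone predicate. This is a hypothetical helper, not the library's implementation (linkify-it compiles these rules into regexps internally):

```javascript
// Sketch of the default root-zone acceptance rules described above.
function zoneAllowed(zone) {
  if (/^[a-z]{2}$/.test(zone)) return true; // any 2-letter root zone
  if (/^xn--/.test(zone)) return true;      // encoded (punycode) root zones
  // The default whitelist of longer zones.
  var known = ['biz', 'com', 'edu', 'gov', 'net', 'org', 'pro', 'web', 'xxx',
               'aero', 'asia', 'coop', 'info', 'museum', 'name', 'shop', 'рф'];
  return known.indexOf(zone) !== -1;
}

console.log(zoneAllowed('io'));    // true  (2-letter zone)
console.log(zoneAllowed('com'));   // true  (whitelisted)
console.log(zoneAllowed('onion')); // false (until added via .tlds())
```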
#### .add(schema, definition)

Add a new rule with a `schema` prefix. For definition details see the
constructor description. To disable an existing rule, use `.add(name, null)`.
#### .set(options)

Override default options. Missing properties will not be changed.