Fixes subnet representations using slash notation

The IPv4 representation was only accepting slash notation with masks
written in exactly 2 digits. In the IPv6 implementation several fixes were
made: the maximum value for a bitmask was 64, which is incorrect, since an
IPv6 prefix can be up to 128 bits; masks written with more or fewer than
2 digits are now accepted as well. A more general fix was added to allow
the unit tests to work even if an invalid IP/range is supplied while
building the "tree": the code now checks whether the tree is NULL before
executing the operator. The initial problem was reported in issue #706.
Felipe Zimmerle
2014-06-10 16:09:05 -07:00
parent 731466cff0
commit 5d92e448ae
3 changed files with 36 additions and 13 deletions

@@ -364,12 +364,12 @@ unsigned char is_netmask_v6(char *ip_strv6) {
     if ((mask_str = strchr(ip_strv6, '/'))) {
         *(mask_str++) = '\0';
-        if (strchr(mask_str, '.') != NULL) {
+        if (strchr(mask_str, ':') != NULL) {
             return 0;
         }
         cidr = atoi(mask_str);
-        if ((cidr < 0) || (cidr > 64)) {
+        if ((cidr < 0) || (cidr > 128)) {
             return 0;
         }
         netmask_v6 = (unsigned char)cidr;