Our website has thousands of incoming nofollow links: does this help our SEO, or is it useless?


We have a huge number of backlinks from good sources, but unfortunately a large chunk of them are nofollow.
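(By nofollow links I mean links on other sites that point to ours with the rel attribute set, e.g.:

<a href="https://example.com/" rel="nofollow">Example anchor text</a>

with example.com standing in for one of the linking sites.)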

I’m just hoping these aren’t completely useless for SEO.


Update:

I edited the question to make it clearer that we have thousands of incoming nofollow backlinks.

CakePHP 3 installation on Ubuntu doesn't have permission to access index.php


I have installed CakePHP on my Ubuntu web server (well, copied it into /var/www/html), following this tutorial: http://askubuntu.com/questions/628938/how-to-install-cakephp-in-ubuntu-14-04

I am using that same version (14.04 LTS) and did everything as described. But now I am getting this:

You don't have permission to access /webroot/index.php on this server.

This is strange, because I set the permissions to 755 for this folder and also for index.php in webroot.
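For reference, these are roughly the commands I used, assuming the app lives directly in /var/www/html:

sudo chmod 755 /var/www/html/webroot
sudo chmod 755 /var/www/html/webroot/index.php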

Is it OK that I submitted sitemaps for both www and non-www?


I have submitted two sitemaps for the same website: one for the www version and one for the non-www version.

Should I delete the sitemap for the version whose URLs redirect? In that case I would keep only the sitemap for the non-www version.

Is there anything else I should do?

Linux Mint 17.3: Screen freezes all the time


I installed the new Linux Mint .iso on my computer yesterday. I updated the system and ran a dist-upgrade too. I installed Chromium-Browser; afterwards I tried to install texlive-full. And then: screen freeze. I restarted my computer, but afterwards I had the same problem. Screen freeze.
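Roughly the steps I took, for reference (Mint 17.3 uses apt):

sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get install chromium-browser
sudo apt-get install texlive-full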

I googled it and could see that a lot of other users have the same problem with Linux Mint 17.3 Rosa with Cinnamon, but I couldn't find a solution. I hope you can help!

PS! My system is encrypted. Could that be the problem?

Why would I tar a single file?


At my company, we download a local development database snapshot as a db.dump.tar.gz file. The compression makes sense, but the tarball contains only a single file (db.dump).
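To make the two alternatives concrete (assuming GNU tar and gzip):

# what we currently get: a one-file archive, then compressed
tar -czf db.dump.tar.gz db.dump

# what I would naively expect for a single file
gzip db.dump    # produces db.dump.gz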

Is there any point in archiving a single file, or is .tar.gz simply such a common idiom that it gets used anyway? Why not just .gz?

How to find the code segment of a Linux driver in tmpfs? (in real time)


I have a character driver called drv1, and a user application communicates with it via ioctl, which passes the parameter struct file *filp. Now I want to find out the address of the code segment of drv1, but I have run into some problems.

At first I guessed that struct file *filp might be useful, so I looked at its definition in the kernel source and found a pointer: struct inode *f_inode; /* cached value */. Then I roughly searched for the definition of struct inode (I'm not sure whether this is the right track, as I'm not familiar with tmpfs); a pointer named struct address_space *i_mapping seems to be what I need. But I don't know how to dig deeper and am stuck; there are some complicated data structures in struct address_space, such as:

struct radix_tree_root  page_tree; /* radix tree of all pages */

and

struct rb_root  i_mmap;            /* tree of private and shared mappings */

Does this mean that the data of the driver drv1 is organized in the form of a radix tree? Or have I missed something else?
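For context, this is the chain of structures I have been following inside the ioctl handler so far; the field names come from the kernel headers I am reading, but whether i_mapping actually leads to the code segment of drv1 is exactly what I cannot figure out (the handler below is just my driver's):

#include <linux/fs.h>

static long drv1_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
        struct inode *inode = filp->f_inode;              /* cached value */
        struct address_space *mapping = inode->i_mapping; /* per-file page cache */

        /* this is where I am stuck: mapping->page_tree is a radix tree of
         * pages and mapping->i_mmap an rbtree of mappings, but I do not see
         * how either would point at drv1's code segment */
        return 0;
}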

Munin mysql plugin silently fails


I installed this Munin MySQL plugin on a RHEL 6 machine and I'm seeing some strange behavior.

I'm not getting any data on the Munin web page (the Categories list does not show a mysql link), but I'm not getting any error either.
All other graphs (disk, processes, system, etc.) work fine.

munin-run mysql and munin-run mysql config print absolutely no output and exit with status zero. Running munin-run with any other plugin works fine.
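For reference, this is exactly what I am checking (exit status via $?):

munin-run mysql
echo $?            # prints 0, nothing else
munin-run mysql config
echo $?            # prints 0, nothing else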

The Munin logfile shows no error:

2016/04/20-14:40:01 CONNECT TCP Peer: "[10.10.10.20]:33967" Local: "[10.10.10.15]:4949"
2016/04/20-14:45:02 CONNECT TCP Peer: "[10.10.10.20]:49531" Local: "[10.10.10.15]:4949"
2016/04/20-14:50:02 CONNECT TCP Peer: "[10.10.10.20]:59469" Local: "[10.10.10.15]:4949"

The MySQL logs show no errors either.

What might be wrong with it?

I installed the same plugin a couple of months ago on a CentOS 7 machine in pretty much the same way, and there it works perfectly.

Managing cron jobs across multiple servers


We are facing a problem managing cron jobs with dependencies across multiple servers.
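To illustrate the kind of dependency I mean (job names and times are hypothetical):

# crontab on server A: dump the database at 01:00
0 1 * * * /usr/local/bin/dump_db.sh

# crontab on server B: import the dump, guessing it is finished by 01:30;
# that guess is exactly the fragile part
30 1 * * * /usr/local/bin/import_db.sh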

I am hoping there is an open-source central management project that can be used to handle this and report the status of each job.

I found a project called Chronos that runs on top of Mesos, but are there any alternatives?

RAID1 vs rsync – security?


I'm setting up a NAS server using an Odroid XU4 and two 2 TB HDDs.
Which setup would be safer (lower risk of losing data / easier recovery):

  1. set up a RAID1 with mdadm

  2. have two separate devices and sync them periodically using rsync

I know that with option 2, if one drive crashed I would lose the data created/modified since the last sync, whereas with RAID it would be a bit more "difficult" to get the data off the still-working drive.
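To make the two options concrete (device and mount-point names are examples):

# option 1: RAID1 mirror across both disks
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# option 2: keep the disks independent and mirror one-way, e.g. daily from cron
rsync -a --delete /mnt/disk1/ /mnt/disk2/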

mount — how long has a partition been mounted?


Aside from viewing the dmesg logs, does mount keep any record of when a partition was mounted? Perhaps by viewing the ctime of the mount target? Does anyone have a definitive command for that?
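The closest I have come is checking the change time of the mount point itself, which I am not sure is reliable (/mnt/data is an example path):

stat -c %z /mnt/data    # ctime of the mount target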
