Calculating canonical covers

Given F = {AB -> C, A -> BC, B -> A}, compute Fc

Is A extraneous in AB -> C? The closure B+ = {A, B, C} (via B -> A and A -> BC) contains C, so yes:
{B -> C, A -> BC, B -> A}
1st possible Fc = {A -> BC, B -> AC}

Is B extraneous in AB -> C? The closure A+ = {A, B, C} (via A -> BC) contains C, so yes:

{A -> C, A -> BC, B -> A}

A -> C is already implied by A -> BC, so:
2nd possible Fc = {A -> BC, B -> A}

Am I calculating Fc correctly? I feel like I am doing something wrong.
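One way to double-check each step is to compute attribute closures mechanically. A minimal sketch (the function name is my own, not from any library):

```python
def closure(attrs, fds):
    """Attribute closure of `attrs` under the FDs in `fds`,
    where each FD is a pair (lhs, rhs) of attribute sets."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            # If the left side is covered, pull in the right side.
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

# F = {AB -> C, A -> BC, B -> A}
F = [({"A", "B"}, {"C"}), ({"A"}, {"B", "C"}), ({"B"}, {"A"})]

# A is extraneous in AB -> C iff C is already in the closure of {B} under F:
print(sorted(closure({"B"}, F)))  # ['A', 'B', 'C'] -> contains C, so A is extraneous

# B is extraneous in AB -> C iff C is already in the closure of {A} under F:
print(sorted(closure({"A"}, F)))  # ['A', 'B', 'C'] -> contains C, so B is extraneous
```

Since C lands in both closures, each of A and B is individually extraneous in AB -> C, which is why two different (equally valid) canonical covers come out depending on which attribute you remove first.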

How to maintain and import self-made libraries

Over the years I’ve written some libraries. These libraries sometimes depend on each other.

As a result, the “tex” directory is structured as follows:

├── library
|   ├── library1.sty
|   ├── library2.sty
|   └── library3.sty
├── project1
|   ├── master.tex
|   ├── chapter1.tex
|   └── chapter2.tex
├── project2
|   └── article.tex
└── project3
    └── paper.tex

The libraries depend on each other. As a result, every library that depends on another loads it in its header by a path relative to the project root, e.g. \RequirePackage{library/library1}.

In other words, the library is imported relative to the root of every project. This is not a good design choice: what if someone copies the libraries into the root folder of a project, or what if the folder “library” is renamed…

It is, however, reasonable to assume that the working directory remains the project root.

What can be done to resolve such library dependencies?

Coping with build order requirements in automated builds

I have three Scala packages being built as separate sbt projects in separate repos with a dependency graph like this:

M     D
 ^   ^
  \ /
   S

S is a service. M is a set of message classes shared between S and another service. D is a DAL used by S and the other service, and some of its model appears in the shared messages.

If I make a breaking change to all three, and push them up to my Git repo, a build of S will be kicked off in Jenkins. The build will only be successful if, when S is pushed, M and D have already been pushed. Otherwise, Jenkins will find it doesn’t have the right dependent package versions available. Even pushing them simultaneously wouldn’t be enough — the dependencies would have to be built and published before the dependent job was even started. Making the jobs dependent in Jenkins isn’t enough, because that would just cause the previous version to be built, resulting in an artifact that doesn’t have the needed version.

Is there a way to set things up so that I don’t have to remember to push things in the right order?

The only way I can see it working is if there was a way that a build could go into a pending state if its dependencies weren’t available yet.

I feel like there’s a simple solution I’m missing. Surely people deal with this a lot?

Portability of an executable to another Linux machine

I’ve installed the program Motion on one Linux machine (M1) and want the same program on another (M2).

There are various builds of this program, and I have forgotten which one I used, so can I do a straight copy of the /usr/bin/motion file from M1 and place it at /usr/bin/motion on M2?

I know where the configuration file is, so I’ll move that across, but I’m not sure which video drivers the working copy of Motion on M1 uses; is there any way of finding out?

Is there a way that I can find out its dependencies?
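For shared-library dependencies there is a standard tool, ldd. A sketch, assuming the binary lives at /usr/bin/motion (adjust the path to wherever it actually is):

```shell
BIN=/usr/bin/motion    # adjust if `command -v motion` reports a different path

if [ -x "$BIN" ]; then
    # List the shared libraries the binary links against; run the same
    # command on M2 -- anything shown as "not found" is a missing dependency.
    ldd "$BIN"

    # Ask the package manager which package (and build) owns the file
    # (dpkg on Debian/Ubuntu, rpm on Fedora/RHEL/CentOS):
    dpkg -S "$BIN" 2>/dev/null || rpm -qf "$BIN" 2>/dev/null || true
fi
```

The ldd output will also show whether the build links against video-capture libraries such as libv4l, which is a hint about which video drivers it expects.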

Are the required parameters of a function called dependencies?

I’m studying dependency injection and I want to know if required function parameters can be considered dependencies.

I’d just like to make sure before I go around referring to them as dependencies and that turns out not to be accurate.

function doSomething(required) {
    if (required !== null) {
        // do stuff
    }
}
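For what it’s worth, this is usually how it is framed: a required parameter is precisely the mechanism by which a dependency gets injected. A small Python sketch of the same idea as the snippet above (all names invented for illustration):

```python
class EmailSender:
    """A collaborator the function depends on (hypothetical)."""
    def send(self, to, body):
        return f"sent {body!r} to {to}"

def notify_user(email, sender):
    # `sender` is both a required parameter and a dependency:
    # notify_user cannot do its job without it, yet it never
    # constructs it itself -- the caller injects it, so a test
    # can pass in a fake instead of a real mailer.
    if sender is None:
        raise ValueError("a sender is required")
    return sender.send(email, "Your account is active")

print(notify_user("a@example.com", EmailSender()))
# sent 'Your account is active' to a@example.com
```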


Optimizing bulk update performance in PostgreSQL with dependencies

Basically my question is the same as this one, but WITH dependencies, so drop/renaming the table is not a trivial option (I assume).

We are refactoring a large, poorly designed table which has many columns and references to it. It currently has a text field that should be a foreign key. The naive update looks like:

UPDATE myTable SET new_id=(SELECT id FROM list WHERE name=old_text);

The above takes practically forever because the table is large, and it temporarily roughly doubles in size, since under MVCC an UPDATE is effectively an INSERT plus a DELETE.

We do not need everything done in one transaction. So we are considering some sort of external script to do the updates in batches of 5000 or so, but tests indicate it will still be painful/slow.

Suggestions on how to improve performance?
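One common mitigation is to batch by primary-key range and commit between batches, so no single transaction rewrites the whole table at once. A sketch of the driver logic, using Python’s built-in sqlite3 purely so it is runnable here; against PostgreSQL the same SQL shape would go through psycopg2. Table and column names follow the question, the pk column and batch driver are assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the real database
cur = conn.cursor()

# Toy versions of the tables in the question.
cur.execute("CREATE TABLE list (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE myTable (pk INTEGER PRIMARY KEY, old_text TEXT, new_id INTEGER)")
cur.executemany("INSERT INTO list VALUES (?, ?)", [(1, "alpha"), (2, "beta")])
cur.executemany("INSERT INTO myTable (pk, old_text) VALUES (?, ?)",
                [(i, "alpha" if i % 2 else "beta") for i in range(1, 10001)])
conn.commit()

BATCH = 5000
(max_pk,) = cur.execute("SELECT MAX(pk) FROM myTable").fetchone()

# Update bounded primary-key ranges, committing after each batch,
# so no single transaction has to rewrite the whole table.
for lo in range(1, max_pk + 1, BATCH):
    cur.execute(
        """UPDATE myTable
           SET new_id = (SELECT id FROM list WHERE name = old_text)
           WHERE pk BETWEEN ? AND ? AND new_id IS NULL""",
        (lo, lo + BATCH - 1),
    )
    conn.commit()

(remaining,) = cur.execute(
    "SELECT COUNT(*) FROM myTable WHERE new_id IS NULL").fetchone()
print(remaining)  # 0
```

On PostgreSQL itself, two other things usually help: rewriting the correlated subquery as a join (UPDATE myTable SET new_id = list.id FROM list WHERE list.name = myTable.old_text), and dropping any indexes or triggers on the updated column for the duration of the migration.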

Why do MySQL server packages have perl dependencies in Linux distros?

I’m trying to clean out some unneeded packages from one of my Gentoo boxes with emerge --depclean, and I thought I had a few perl modules installed that none of my wanted packages should require.

So, I was a bit surprised to see that:

dev-db/mysql-5.5.39 requires >=dev-perl/DBD-mysql-2.9004

Shouldn’t it be the other way around? Why on earth is mysql dependent on a perl package?

The official MySQL documentation only says that perl is required if running the test scripts when/after compiling from source.

I use the IUS releases of the LAMP (where P means PHP) stack on my CentOS boxes, and the mysql55-server-5.5.39-1.ius.el6.x86_64 package likewise pulls in several perl module requirements (obtained with rpm -qR).

Is there really a need for these requirements on the server packages?

Cross-compiling Slackware: is the build order listed anywhere?

I’m building a Slackware system from source and hitting a dependency wall here. (Before you ask: no, I’m not trying to “make it faster”; I’m building against a different C library.) Getting a toolchain and the very basics (coreutils, archivers, shell, perl, kernel, etc.) was simple enough, but when I look at the remaining several hundred packages I don’t know what order they need to be built in to meet their dependencies.

Looking through the various docs I don’t see any build order listed, and there’s no master build script either, just the individual packages’ SlackBuilds. And .tgz packages don’t list dependencies the way debs or RPMs do. Is this just something Patrick keeps in his head that the rest of us mortals have to figure out manually, or am I missing a doc somewhere?

I tried using BLFS but Slackware seems to build X much earlier in the process than BLFS does. I suppose I can simply try to build everything, note when dependencies fail, and manually make a dependency tree, but I’m hoping there’s just a build list somewhere I’m missing…
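If you do end up recording failed dependencies manually, the build order itself is just a topological sort of that graph. A minimal sketch with Python’s standard-library graphlib (the package names and edges here are invented, not Slackware’s real ones):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical dependency map: package -> set of packages that
# must be built before it.
deps = {
    "xorg-server": {"libX11", "mesa"},
    "mesa": {"libX11"},
    "libX11": {"xorgproto"},
    "xorgproto": set(),
}

# static_order() yields every package with its dependencies first.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

graphlib also raises CycleError if the recorded dependencies are circular, which is useful for catching recording mistakes.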

Are there two types of associations between objects, or are there just different representations?

I’ve been spending some time on ‘re-tuning’ some of my OOP understanding, and I’ve come up against a concept that is confusing me.

Let’s say I have two objects: a user object and an account object. Back to basics here, but each object has state, behaviour and identity (each is what is often referred to as an entity object).

The user object manages behaviour purely associated with a user. For example, we could have a login(credentials) method that returns normally on a successful login or throws an exception if it fails.

The account object manages behaviour purely associated with a user’s account. For example, we could have a method checkActive() that checks if the account is active: it checks that the account has an up-to-date subscription and that no admin flags have been added which would make it inactive. It returns normally if the checks pass, or throws an exception if not.

Now here lies my problem. There is clearly a relationship between user and account, but I feel that there are actually two TYPES of association to consider. One that is data driven (exists only in the data/state of the objects and the database) and one that is behaviour driven (represents an object call to methods of the associated object).

Data Driven Association

In the example I have presented, there is clearly a data association between user and account. In a database schema we could have the following table:

    accounts(id, user_id, ...)

When we instantiate the account and load the database data into it, there will be a class variable containing user_id. In essence, the account object holds an integer representation of the user through user_id.

Behaviour Driven Association

Behaviour driven associations are really the dependencies of an object. If object A calls methods on object B there is an association going from A to B. A holds an object representation of B.

In my example case, neither the user object nor the account object depend on each other to perform their tasks i.e. neither object calls methods on the other object. There is therefore no behaviour driven association between the two and neither object holds an object reference to the other.
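The two kinds of association described above can be made concrete in code. A Python sketch; the ReportService class is an invented third object, since in the example above neither user nor account calls the other:

```python
class Account:
    """Data-driven association: the account holds only an integer
    user_id loaded from its database row, not a User object."""
    def __init__(self, account_id, user_id, active):
        self.account_id = account_id
        self.user_id = user_id   # integer representation of the user
        self.active = active

    def check_active(self):
        # Returns normally if the checks pass, raises otherwise.
        if not self.active:
            raise RuntimeError("account is inactive")


class ReportService:
    """Behaviour-driven association: this object calls methods on an
    Account, so it holds an object reference -- a true dependency."""
    def __init__(self, account):
        self.account = account   # object representation

    def run(self):
        self.account.check_active()   # A calls a method on B
        return f"report for user {self.account.user_id}"


report = ReportService(Account(account_id=1, user_id=42, active=True)).run()
print(report)  # report for user 42
```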


Is the case I presented purely a case of entity representation? The association between user and account is always present, but it is being represented in different ways?

I.e., the user entity has an identity that can be represented in different forms: as an object (the instantiated user object) or as a unique integer from the users table in the database.

Is this a formalised way of recognising different implementations of associations or have I completely lost my mind?

One thing that bugs me is how would I describe the differences in UML or similar? Or is it just an implementation detail?

How to include in the result of a `SELECT … GROUP BY …` all the other columns that are functionally dependent on the grouping ones?

I’ll base this question on a toy example.

Let this be table A:

 U | V | W | X | Y |  Z
 a | b | c | 1 | 6 | 8.3
 a | b | c | 1 | 4 | 3.7
 a | b | f | 3 | 4 | 2.6
 a | b | f | 3 | 2 | 6.0
 a | e | c | 1 | 0 | 3.5
 a | e | c | 1 | 5 | 8.8
 d | b | f | 1 | 0 | 3.5
 d | b | f | 1 | 3 | 2.3
 d | e | c | 2 | 6 | 2.2
 d | e | c | 2 | 4 | 3.3
 d | e | f | 0 | 7 | 5.0
 d | e | f | 0 | 6 | 3.6

I can produce a second table B by grouping the rows of A by columns U, V, and W, and computing the average of column Z for each group.

 U | V | W | Z_avg
 a | b | c |  6.0
 a | b | f |  4.3
 a | e | c |  6.2
 d | b | f |  2.9
 d | e | c |  2.7
 d | e | f |  4.3

The SQL for this would be something like

    SELECT U, V, W, AVG(Z) AS Z_avg
    FROM A
    GROUP BY U, V, W;

But I want the new table to include all the columns of the original table that have a functional dependence on the grouping columns U, V, and W. In this example there is one such column, namely column X.

In other words, I want to generate the table C shown below:

 U | V | W | X | Z_avg
 a | b | c | 1 |  6.0
 a | b | f | 3 |  4.3
 a | e | c | 1 |  6.2
 d | b | f | 1 |  2.9
 d | e | c | 2 |  2.7
 d | e | f | 0 |  4.3

So this problem has two parts, at least conceptually.

  1. How to determine which columns are functionally dependent on
    columns U, V, and W?

  2. What is the SQL to generate table C?

I know how to implement a (say, Python) script that can answer (1), but it is tedious and slow. (Basically, for each candidate column, in this case X and Y, the script would collect all of its values for each distinct combination of values in columns U, V, and W; if each of these sets of values has exactly one element, then the column is functionally dependent on U, V, and W.)

Likewise, once I have identified the functionally dependent columns, I can muddle my way through (using temporary tables and whatnot) to eventually end up with something like table C above (thus effectively solving (2)).

I figure, however, that this task is sufficiently common that there may be standard tools/techniques to carry it out.
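Both parts can be sketched compactly; here with Python’s built-in sqlite3 and the toy table A from above. Part (1) reduces to “no (U, V, W) group may contain two distinct values of the candidate column”, and once X is known to be dependent, part (2) reduces to adding X to the GROUP BY list, which cannot split any group. The helper name is mine:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE A (U TEXT, V TEXT, W TEXT, X INT, Y INT, Z REAL)")
cur.executemany("INSERT INTO A VALUES (?,?,?,?,?,?)", [
    ("a","b","c",1,6,8.3), ("a","b","c",1,4,3.7),
    ("a","b","f",3,4,2.6), ("a","b","f",3,2,6.0),
    ("a","e","c",1,0,3.5), ("a","e","c",1,5,8.8),
    ("d","b","f",1,0,3.5), ("d","b","f",1,3,2.3),
    ("d","e","c",2,6,2.2), ("d","e","c",2,4,3.3),
    ("d","e","f",0,7,5.0), ("d","e","f",0,6,3.6),
])

def depends_on_uvw(col):
    """Part (1): col is functionally dependent on (U, V, W) iff no
    group has more than one distinct value of col. (col is trusted
    input here -- it is interpolated straight into the SQL.)"""
    (violations,) = cur.execute(
        f"SELECT COUNT(*) FROM (SELECT COUNT(DISTINCT {col}) AS n "
        f"FROM A GROUP BY U, V, W) AS g WHERE g.n > 1"
    ).fetchone()
    return violations == 0

print(depends_on_uvw("X"))  # True
print(depends_on_uvw("Y"))  # False

# Part (2): fold the dependent column into the GROUP BY to get table C.
table_c = cur.execute(
    "SELECT U, V, W, X, AVG(Z) FROM A GROUP BY U, V, W, X ORDER BY U, V, W"
).fetchall()
print(table_c[0])
```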
