Driving an infrared LED from a coin cell


I am designing a remote-control-style IR transmitter that will be powered by a coin cell (a CR2032) and driven by a microcontroller. The LED is pulsed at around 40 kHz. How can I ensure that the LED current stays consistent as the battery discharges? Here are the problems I've encountered:

  1. The CR2032 datasheet from Energizer shows the battery voltage starting at 3 V and falling to about 1.8-2 V over its life. (1.8 V is also the brown-out threshold of the micro I'm using, the ATtiny85V.) If the LED has a maximum forward voltage of 1.5 V and I use just a series resistor, the voltage across that resistor will drop from 1.5 V to 0.3 V over the lifetime of the battery. So if I set the LED current to 15 mA at the nominal 3 V, it will be just 3 mA by the time the battery dies (the arithmetic is spelled out after this list).
  2. Trying a more advanced circuit, such as the simple BJT current limiter in the schematic below, works fine down to around 2.2-2.4 V. Rsense needs around 0.5 V across it because of Q2's $V_{BE}$, and Q1's saturation voltage adds another couple of tenths. This is too much overhead; I need to reduce it to around 0.3 V somehow.
  3. Finally, I don't want to use a switching regulator, for two reasons: I want to keep the part count low, and I don't want the switching frequency to interfere with the 40 kHz modulation of the LED.
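
To make the numbers in point 1 concrete, with the series resistor sized for 15 mA at the nominal 3 V and $V_F = 1.5$ V:

$$R = \frac{3\,\text{V} - 1.5\,\text{V}}{15\,\text{mA}} = 100\,\Omega, \qquad I_{\text{end of life}} = \frac{1.8\,\text{V} - 1.5\,\text{V}}{100\,\Omega} = 3\,\text{mA}$$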

[schematic: two-transistor BJT current limiter (Q1, Q2, Rsense) driving the IR LED]

Or, let me know if I’m overthinking this and a widely varying LED current is ok for a remote control.

Convolution in MATLAB of a transfer function


My problem is to find the output for the input U(t+1) - U(t-1) in MATLAB, given the transfer function H(s). I know that I should be able to find the output of an LTI system for any input when given H(s), so I tried using convolution to find the output and plot it. Below is my attempt at using conv to produce the output; the impulse response is Ht = dirac(t) - 4*exp(-6*t).
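
For reference, the partial-fraction step behind that impulse response is

$$H(s) = \frac{s+2}{s+6} = 1 - \frac{4}{s+6} \;\Longrightarrow\; h(t) = \delta(t) - 4e^{-6t}u(t),$$

which matches what ilaplace returns in the code below.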

syms s t b
Hs=(s+2)/(s+6)
Ht=ilaplace(Hs)
tt=0:.1:60
Hnum= subs(Ht, 't',[eps:.1:10]);
% turn the symbolic vector into a numeric one
A=double(Hnum);
input=ones(1, 501);

output=conv(input,A)
figure(3)
plot(tt,output)  

I also tried the direct convolution-integral approach:

 %b is tau 
 t1=0;
 t2=int(exp(-(6*b)),b,0,t+1)+1
 t3=int(exp(-(6*b)),b,t-1,t+1)

I know that h(t) * x(t) = y(t) (convolution). I was not able to get any reasonable output for the graphs. Any hints or help are much appreciated. :)

Cannot log in to SQL Server 2008 R2


I have a problem with my SQL Server 2008 R2 instance. I want to log in using Windows authentication and the server name “local”, but error 2 is returned. I have also tried logging in with my server's IP address as the server name, but that failed too. Does anybody know what I should do?

Error message:

A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 – Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 2)

How can one drop all default values and allow NULL for all columns of a table?


I’m working on a PHP script to modify the primary key (PK) of a table. My current code is:

$q = "SHOW KEYS FROM $db.$table WHERE Key_name='PRIMARY';";
$keys = Query ( $q);
if ( $keys)
{
    $q = "ALTER TABLE $db.$table
            DROP PRIMARY KEY,
            ADD PRIMARY KEY ($keyQ);";
    $res = Query ( $q);
}
else
{
    $q = "ALTER TABLE $db.$table
            ADD PRIMARY KEY ($keyQ);";
    $res = Query ( $q);
}

This mostly works for what I need. The only problem is that if I drop a field as the PK, it retains the Null = 'No' and Default = 0 properties it acquired from being a PK.

Since I'm going to define the PK later, and I generally don't need NOT NULL or default constraints, I would like to clear all NOT NULL constraints and default values first. Then I can define the primary key and know that its columns are the only ones with default values and NOT NULL constraints.
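
In other words, the per-column statement I would otherwise have to generate by hand looks something like the one below (the table name, column name and INT type are just placeholders for illustration):

-- hypothetical example: MODIFY replaces the whole column definition,
-- which removes both the NOT NULL and the default in one go
ALTER TABLE my_table MODIFY some_column INT NULL DEFAULT NULL;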

Is there a way to do this with MySQL? I could do it with PHP, but it would probably be more work and would be less elegant.

Really slow Postgres query


I have multiple tables in Postgres with PostGIS, and when I run the query

SELECT * FROM rxmsg
JOIN Vehicles ON rxmsg.vid=Vehicles.vid
JOIN stops ON ST_DWithin(rxmsg.point, stops.point, 1000)
JOIN drivers ON Vehicles.vid = drivers.vehicle_id 
WHERE rxmsg.rxdt BETWEEN drivers.date_activated AND drivers.date_deactivated;

the cost is enormous: http://explain.depesz.com/s/8BE

My question is: Why is it so costly?

Sample data

RXMSG (estimated 60m rows)

It has indexes on the date column and on point (point is a geographic point made from lat and lon).
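
Roughly, the definitions are along these lines (the index names are approximate and the GiST part is from memory, so treat this as an assumption rather than a copy from the database):

-- approximate definitions, not copied verbatim from the database
CREATE INDEX rxmsg_rxdt_idx  ON rxmsg (rxdt);
CREATE INDEX rxmsg_point_idx ON rxmsg USING gist (point);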

[screenshot of sample rxmsg rows]

VEHICLES (63 rows)

[screenshot of sample vehicles rows]

DRIVERS (2 rows)

[screenshot of sample drivers rows]

STOPS (338 rows)

[screenshot of sample stops rows]

Also, the data and table definitions are in this pastebin: http://pastebin.com/Pj8vsL9R

Update all data in a table without affecting the ID (PK)


I’m using SQL Server 2005.

I have a table with 4846 customer records. The PK is called ID and its values span from 40003 to 79870.
These numbers are used in other tables to hold information about user orders, for example.
The table also has a decimal field that stores the amount of credit each user is entitled to (CustomerCredit).

Now, the problem is that I can't come up with a way of updating all of the CustomerCredit values without affecting the ID field. The only information that needs to change is the credit amount for each user.
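
To make it concrete, the statement I'm after has roughly the shape below; the table name Customers and the NewCredits source of new values are placeholders, since I'm not sure this is even the right approach:

-- hypothetical sketch: only CustomerCredit appears in SET, ID is only used to match rows
UPDATE c
SET    c.CustomerCredit = n.NewCredit
FROM   Customers AS c
JOIN   NewCredits AS n ON n.ID = c.ID;

What I can't tell is whether an update like this can ever change ID, or break the rows in the other tables that reference it.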

Any clue will be more than appreciated.

GROUP BY Returning Different Totals [closed]


I have two very similar queries that are returning different totals. The first query, where the total (of subscription_payment.price) is calculated in PHP (and verified in Excel), is the query that the second one is based on.

First Query:

SELECT *,
       subscriptions_new.id AS subscription_id,
       plans_new.name AS plan_name,
       plans_new.guideid AS guideid,
       subscription_payment.date AS date,
       subscription_payment.renewal AS renewal,
       subscription_payment.price AS price,
       subscription_payment.price AS renewal_price
FROM transactions_new
JOIN accounts ON accounts.id = transactions_new.userid
JOIN subscriptions_new ON FIND_IN_SET(subscriptions_new.id, transactions_new.subscription_ids)
JOIN plans_pricing ON subscriptions_new.pricing_id = plans_pricing.id
JOIN subscription_payment ON subscription_payment.subscription_id = subscriptions_new.id
JOIN plans_new ON plans_new.id = plans_pricing.plan_id
WHERE
  subscription_payment.date >= 1417410000
  AND subscription_payment.date <= 1418187540
  AND subscription_payment.deleted != 1
GROUP BY subscriptions_new.id
ORDER BY plan_code DESC

Second Query:

SELECT SUM(subscription_payment.price) AS total,
       COUNT(*) AS qty,
       plan_code AS plan_code
FROM transactions_new
JOIN accounts ON accounts.id = transactions_new.userid
JOIN subscriptions_new ON FIND_IN_SET(subscriptions_new.id, transactions_new.subscription_ids)
JOIN plans_pricing ON subscriptions_new.pricing_id = plans_pricing.id
JOIN subscription_payment ON subscription_payment.subscription_id = subscriptions_new.id
JOIN plans_new ON plans_new.id = plans_pricing.plan_id
WHERE
  subscription_payment.date >= 1417410000
  AND subscription_payment.date <= 1418187540
  AND subscription_payment.deleted != 1
GROUP BY plan_code
ORDER BY plan_code DESC

In the second query, total does not match what I get by adding up the price values from the rows returned by the first query.

Any help is appreciated. Also note that the GROUP BY itself makes no difference when calculating the totals (so it's okay for the two queries to group differently; that's not what is causing the issue).

Find the dates when the status actually changed for a user


I need to find the dates on which the status actually changed.

Rows should be analysed in order from the start of the period to the end:

Start Period : 11-10-2014
End Period : 21-10-2014

Data:

ID, Name, Effective_date, Status
1   A       21-10-2014      OFF
2   A       20-10-2014      OFF
3   A       19-10-2014      On
4   A       18-10-2014      On
5   A       17-10-2014      On
6   A       16-10-2014      OFF
7   A       15-10-2014      On
8   A       14-10-2014      On
9   A       13-10-2014      OFF
10  A       12-10-2014      OFF
11  A       11-10-2014      OFF

I am using SQL Server 2000.
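
For anyone who wants to reproduce this, the sample data can be set up with something like the following (the table name StatusHistory and the column types are my assumptions):

-- StatusHistory is a hypothetical table name; column types are guesses
CREATE TABLE StatusHistory (
    ID             int         NOT NULL,
    Name           varchar(10) NOT NULL,
    Effective_date datetime    NOT NULL,
    Status         varchar(3)  NOT NULL
);

-- dates written as YYYYMMDD so they load regardless of language settings
INSERT INTO StatusHistory VALUES ( 1, 'A', '20141021', 'OFF');
INSERT INTO StatusHistory VALUES ( 2, 'A', '20141020', 'OFF');
INSERT INTO StatusHistory VALUES ( 3, 'A', '20141019', 'On');
INSERT INTO StatusHistory VALUES ( 4, 'A', '20141018', 'On');
INSERT INTO StatusHistory VALUES ( 5, 'A', '20141017', 'On');
INSERT INTO StatusHistory VALUES ( 6, 'A', '20141016', 'OFF');
INSERT INTO StatusHistory VALUES ( 7, 'A', '20141015', 'On');
INSERT INTO StatusHistory VALUES ( 8, 'A', '20141014', 'On');
INSERT INTO StatusHistory VALUES ( 9, 'A', '20141013', 'OFF');
INSERT INTO StatusHistory VALUES (10, 'A', '20141012', 'OFF');
INSERT INTO StatusHistory VALUES (11, 'A', '20141011', 'OFF');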

Expected output (the first row of each run of consecutive days with the same status, scanning forward from 11-10-2014):

ID, Name, Effective_date, Status
2   A       20-10-2014      OFF
5   A       17-10-2014      On
6   A       16-10-2014      OFF
8   A       14-10-2014      On
11  A       11-10-2014      OFF

ALTER TABLE statement waiting (for hours, on a dev machine), but no locks shown


I have been trying to issue a simple:

ALTER TABLE tablename ADD COLUMN id_col character varying (30)

type statement on a Postgres 9.1.13 build on Debian. The application is still in private beta, so the volume is low, and yet something is blocking this statement. Following this Postgres lock monitoring post, and running the query,

SELECT bl.pid        AS blocked_pid,
     a.usename       AS blocked_user,
     kl.pid          AS blocking_pid,
     ka.usename      AS blocking_user,
     a.current_query AS blocked_statement
 FROM  pg_catalog.pg_locks  bl
 JOIN pg_catalog.pg_stat_activity a  ON a.procpid = bl.pid
 JOIN pg_catalog.pg_locks    kl ON kl.transactionid = bl.transactionid AND kl.pid != bl.pid
 JOIN pg_catalog.pg_stat_activity ka ON ka.procpid = kl.pid
 WHERE NOT bl.granted;

returns no results.

If I run,

SELECT * FROM pg_stat_activity

all I see is the ALTER TABLE statement with waiting = t, and a couple of other queries in an IDLE state.
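
If it helps, I can also list the locks held on the table directly with something like this (same placeholder table name as in the ALTER statement above):

SELECT l.pid, l.mode, l.granted, c.relname
FROM pg_catalog.pg_locks l
LEFT JOIN pg_catalog.pg_class c ON c.oid = l.relation
WHERE c.relname = 'tablename';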

I am not a DBA, more of a database developer, so apologies if I am missing something really obvious, but I have never seen a situation like this on a low-volume dev box and am at a loss as to how to proceed.

MySQL: is there a way to make a self-referencing join recursive?


I need to insert some records derived from relationships in the same “pivot” table.

We have a products_to_products table which stores relationships between products.
Here’s a simplified example (also available in this fiddle here).

CREATE TABLE products_to_products
    (`left_id` varchar(4), `right_id` varchar(4) );
INSERT INTO products_to_products (`left_id`, `right_id`)
VALUES
    (1234, 'aaaa'),
    (1234, 'bbbb'),
    (1234, 'cccc'),
    (5678, 'dddd'),
    (5678, 'eeee'),
    (5678, 'ffff'),
    (1011, 'gggg'),
    (1011, 'aaaa'),
    (1011, 'dddd');

What I need to do is insert new rows which express the relationships found there.
For example, 1234 and 1011 are related on aaaa.
Likewise for 5678 and 1011 because they relate on dddd.

Easy enough to collect with something like:

SELECT DISTINCT 
  l.left_id AS left_id, 
  r.left_id AS right_id
FROM products_to_products AS l
JOIN products_to_products AS r
  ON l.right_id = r.right_id
WHERE l.left_id <> r.left_id

…but then, that doesn't include the fact that 1234 and 5678 would now relate via 1011 (the first pass only yields pairs such as (1234, 1011) and (5678, 1011)).

Is there a way to make that select recursive, or am I just going to have to run my queries over and over again to keep picking up the relationships created by the previous insert query?
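
For context, the repeated pass I have in mind would just wrap the select above in an insert, roughly like this (a sketch that ignores duplicate handling):

-- one pass; running it repeatedly picks up relationships created by the previous pass
INSERT INTO products_to_products (left_id, right_id)
SELECT DISTINCT l.left_id, r.left_id
FROM products_to_products AS l
JOIN products_to_products AS r ON l.right_id = r.right_id
WHERE l.left_id <> r.left_id;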

NOTE: I’m better at reading SQL than thinking SQL, so please feel free to mention any other optimizations/improvements that might be worthwhile :)
