Script Error 'Invalid Pointer' in Spatial Analyst Tool


I have ArcGIS 10.3 (previously 10.2) running on Windows 10 (previously Windows 8). My GIS licence is provided by the company where I am doing an internship, so I can't simply uninstall and reinstall the software.

So, I have this script error, which often appears when I want to use a Spatial Analyst or geoprocessing tool:

Line 51, Char 4, Error Invalid Pointer, Code 0, URL
file:///C:/Users/username/AppData/Roaming/ESRI/Desktop10.3/ArcToolbox/Dlg/MdDlgContent.htm

I have looked at tons of articles online and tried all of their proposed solutions (setting Internet Explorer as the default browser, enabling ActiveX controls in the security settings, clearing the roaming and local ArcToolbox history in my AppData, installing a 'new patch version' of MdDlgContent.htm in my ArcGIS folder provided by ESRI technical support) and NOTHING IS WORKING.

Are QGIS Intersect and Clip Tools Broken?


I was getting this problem with a complex map so I have created a simplified map with two layers.

Accessible greenspace contains a couple of hundred small, mostly isolated, polygons with one (name) data field.

Black Country 32 Towns has 32 larger contiguous polygons with three data fields.

They have the same CRS, and all of the greenspace layer physically overlaps the towns layer.

I want to create a new layer that is the same as accessible greenspace, but with the appropriate town data appended to each polygon. Then I can use GroupStats to find out how much accessible greenspace is in each town.

Intersect creates a new layer with all four data fields, as I would expect, but it is empty. It should contain all the accessible greenspace polygons, with any polygon that overlaps two towns split in two.
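For reference, the operation I'm running amounts to roughly this from the Processing Python console (the algorithm ID and parameter names are the QGIS 3 ones and the layer paths are placeholders; in 2.x the details differ):

import processing

# Placeholder paths; in practice these are the accessible greenspace and
# Black Country 32 Towns layers described above.
processing.run("native:intersection", {
    "INPUT": "accessible_greenspace.shp",
    "OVERLAY": "black_country_32_towns.shp",
    "OUTPUT": "greenspace_with_town_data.shp",
})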

Clip doesn't work either.

A test with Buffer works, so the whole geoprocessing module isn't broken.

This sort of operation worked two weeks ago, but doesn’t now.

Lots of questions here report a similar failure to create an output layer, but there seem to be no definitive answers, just requests for more detail or the suggestion to use GRASS (which I find impenetrable).

My question is “Are the QGIS Intersect and Clip Tools Broken?” and is this an intermittent/random issue?

Error adding row – sequence size must match size of row


I have this table with one row

timestamp_pretty   3.1.2014 9:13
timestamp          1,38874E+12
msgid              3
targetType         A
mmsi               205533000
lat                53.4346
long               14.580546
posacc             0
sog                0
cog                319.5
shipType           CARGO
dimBow             68
draught            4
dimPort            6
dimStarboard       5
dimStern           12
month              1
week               1
imo                8404446
country            Belgium
name               FAST JULIA

I want to make a point feature class from it with arcpy, using an insert cursor:

import csv
import arcpy

# incsv, outfc and table are defined earlier in the script (the input CSV,
# the output point feature class and a name used in the progress message).

# Read the csv
csv.register_dialect("xls", lineterminator="\n")
f = open(incsv, "r")
reader = csv.reader(f, dialect="xls")

# Add fields to the feature class, mirroring the CSV columns
desc = arcpy.Describe(incsv)
for field in desc.fields:
    arcpy.AddField_management(outfc, field.name, field.type)

# The field names
fields = desc.fields
fieldnames = [field.name for field in fields]

# Create InsertCursor.
cursor = arcpy.da.InsertCursor(outfc, ['SHAPE@XY'] + fieldnames)
count = 0
next(reader, None)  # skip the header
for row in reader:
    if count % 10000 == 0:
        print "processing row {0}".format(count) + " of " + table
    Ycoord = row[5]
    Xcoord = row[6]
    newrow = [(float(Xcoord), float(Ycoord))] + row[0:]
    cursor.insertRow([newrow])  # this is the line that raises the error
    count += 1
del cursor
f.close()

But I get this error:

line 130, in <module>
    cursor.insertRow([newrow])
TypeError: sequence size must match size of the row

I've been through similar answers on SE and made many tests (over days), but to no avail.

****EDIT****

If I print the result of newrow and row[0:] like this:

newrow = [(float(Xcoord), float(Ycoord))] + row[0:]
print "new row: "+str(newrow)
print "row[0:]: "+str(row[0:])

****EDIT 2****
Field names and types used to create the feature class:

[u'timestamp_pretty', u'timestamp', u'msgid', u'targetType', u'mmsi', u'lat', u'long', u'lat_D', u'long_D', u'posacc', u'sog', u'cog', u'shipType', u'dimBow', u'draught', u'dimPort', u'dimStarboard', u'dimStern', u'month', u'week', u'imo', u'country', u'name']
[u'Date', u'Double', u'Integer', u'String', u'Integer', u'String', u'String', u'Double', u'Double', u'Integer', u'String', u'String', u'String', u'Integer', u'String', u'Integer', u'Integer', u'Integer', u'Integer', u'Integer', u'Integer', u'String', u'String']

I get this result:

new row: [(14.580546, 53.4346), '03/01/2014 09:13:26', '1388740406000', '3', 'A', '205533000', '53.4346', '14.580546', '0', '0', '319.5', 'CARGO', '68', '4', '6', '5', '12', '01', '01', '8404446', 'Belgium', 'FAST JULIA']
row[0:]: ['03/01/2014 09:13:26', '1388740406000', '3', 'A', '205533000', '53.4346', '14.580546', '0', '0', '319.5', 'CARGO', '68', '4', '6', '5', '12', '01', '01', '8404446', 'Belgium', 'FAST JULIA']

I know newrow has 22 elements (counting the coordinate tuple at the beginning) and row[0:] has 21. Is that the error? If so, why did it work in the original script I got from @John?
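For what it's worth, my understanding of how the cursor is supposed to be fed is along the lines of this minimal sketch (hypothetical feature class and field names, just to show that the sequence passed to insertRow must have exactly one value per column named when the cursor was created):

import arcpy

# Hypothetical: a cursor opened with three "columns" (the geometry token
# plus two fields) must receive a flat sequence of exactly three values.
cursor = arcpy.da.InsertCursor(r"C:\temp\test.gdb\ships", ["SHAPE@XY", "mmsi", "name"])
cursor.insertRow([(14.580546, 53.4346), 205533000, "FAST JULIA"])
del cursor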

Forcing Python Toolbox tool to break loop and do cleanup when user clicks Cancel?


I am writing a Python toolbox tool that performs heavy tasks in a loop. When the user clicks "Close" I want to break the loop, perform some cleanup, and then finish.

Consider this example tool that counts down from ten:

import arcpy
import time

class Tool(object):

   def __init__(self):
      self.label = "Cancel Test"
      self.description = "Tests how the cancel function works."
      self.canRunInBackground = False

   def execute(self, parameters, messages):
      for i in range(10, -1, -1):
         arcpy.AddMessage(i)
         time.sleep(1)
      arcpy.AddMessage("We have lift-off!")
      return

When I click "Close" at the beginning of the execution, it does not break the loop or give me an opportunity to perform any cleanup. Instead it loops to the very end and adds the message "We have lift-off!" before it outputs this:

Completed script CancelTest...
Cancelled function
(CancelTest) aborted by User.
Failed at [TIME] (Elapsed Time: 10,01 seconds)

I have found documentation for ArcGIS Pro (which uses Python 3.4) that describes how to do this there using isCancelled:

import arcpy
import time

#Make sure it does not auto cancel.
arcpy.env.autoCancelling = False

class Tool(object):

   def __init__(self):
      self.label = "Cancel Test"
      self.description = "Tests how the cancel function works."
      self.canRunInBackground = False

   def execute(self, parameters, messages):
      for i in range(10, -1, -1):
         arcpy.AddMessage(i)
         time.sleep(1)
         #Check if the user clicked "Close"
         if arcpy.env.isCancelled:
            arcpy.AddMessage("Launch aborted!")
            return
      arcpy.AddMessage("We have lift-off!")
      return

However, when I run this from ArcCatalog 10.3 (i.e. not using ArcGIS Pro) it gives me the following error:

Traceback (most recent call last):
  File "H:\Mina Dokument\DGD\Python\CancelTest.py", line 19, in execute
    if arcpy.env.isCancelled:
AttributeError: 'GPEnvironment' object has no attribute 'isCancelled'

Is there any way to mimic the behavior that is available in ArcGIS Pro in an ordinary Python toolbox using Python 2.7?

Exporting feature class into multiple feature classes based on field values using ArcGIS for Desktop?


I have a feature class with over 2,000 features, and I need to split them into individual feature classes based on a field value.

I know there has to be a way to do this, but I just can’t figure it out!
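The closest I've got conceptually is looping over the unique values of the field and exporting a selection for each one, something like the sketch below (the paths and the field name are placeholders, and I haven't got it working end to end):

import arcpy

infc = r"C:\data\parcels.gdb\features"   # placeholder input feature class
outws = r"C:\data\split.gdb"             # placeholder output workspace
field = "NAME"                           # placeholder text field to split on

# One exported feature class per unique value of the field.
values = {row[0] for row in arcpy.da.SearchCursor(infc, [field])}
for value in values:
    where = "{0} = '{1}'".format(arcpy.AddFieldDelimiters(infc, field), value)
    outname = arcpy.ValidateTableName(str(value), outws)
    arcpy.Select_analysis(infc, outws + "/" + outname, where)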

Any takers?

How to deal with large dataset when using Intersect?


I'm trying to do something similar to this post, but the Intersect tool runs indefinitely and never finishes.

I want to create 1-mile buffers around all of my study parcel centroids (about 5,000) and then intersect them with the block group shapefile, which contains the total population for each block group (about 21 block groups). What I want is the intersection of all the buffers with the block groups, so I can use the resulting areas to calculate the population within each buffer, assuming the population is evenly distributed within each block group. But I don't know why the Intersect tool takes so long; it just stalled at 7%. Are there any better tools that could replace Intersect? A sketch of the workflow is below the list.

  1. There are about 5,000 records for parcels and 21 for block groups.
  2. My RAM is 2 GB and I am using a file geodatabase. The two shapefiles should not be more than 10 MB.
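In arcpy terms the workflow amounts to this (the paths are placeholders):

import arcpy

parcels = r"C:\data\study.gdb\parcel_centroids"   # placeholder
blockgroups = r"C:\data\study.gdb\block_groups"   # placeholder
buffers = r"C:\data\study.gdb\parcel_buffers"
out = r"C:\data\study.gdb\buffer_blockgroup_intersect"

# 1-mile buffers around the parcel centroids
arcpy.Buffer_analysis(parcels, buffers, "1 Mile")

# Intersect the buffers with the block groups to get the overlap areas
arcpy.Intersect_analysis([buffers, blockgroups], out)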

How to leave out cities with fewer than X observations in ArcGIS?


I would like to generate some quintile maps based on the averaged sentiment score in each city in the US. However, not every city has the same number of data points. How can I tell ArcGIS to show the averaged sentiment score only for cities that have at least, say, 100 data points and leave the ones with fewer than 100 data points blank?

To be more specific, I have some CSV data with latitude and longitude:

  1. I first import the CSV as XY data into ArcGIS,
  2. then I export it to a .shp file,
  3. each XY point contains a sentiment score, and ultimately I would like to get an averaged sentiment score for each city,
  4. I then spatially join the new .shp file containing my data with a base file of US cities, with the aggregation set to average the sentiment scores into one averaged score per city,
  5. then I would like to generate a quintile map based on the averaged sentiment score.

I just want the final quintile map to leave out those cities that originally had fewer than 100 data points. I am not sure how and where, when I generate the final quintile map, I can tell it to look at the original data in the .shp and screen out the cities that don't have at least 100 data points.
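To illustrate what I have in mind in arcpy terms (the paths are placeholders, and I'm assuming the Join_Count field that Spatial Join normally adds can serve as the per-city point count):

import arcpy

cities = r"C:\data\us_cities.shp"           # placeholder base layer
points = r"C:\data\sentiment_points.shp"    # placeholder XY point layer
joined = r"C:\data\cities_sentiment.shp"

# Spatial Join adds a Join_Count field (the number of points in each city);
# the sentiment score is averaged via the field map set up in the tool.
arcpy.SpatialJoin_analysis(cities, points, joined,
                           join_operation="JOIN_ONE_TO_ONE")

# Keep only cities that received at least 100 points before classifying.
arcpy.MakeFeatureLayer_management(joined, "cities_100plus",
                                  where_clause='"Join_Count" >= 100')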

Edits (adding more details after visiting the ArcGIS doc):

Although I am getting more knowledgeable about how to select data via a query, it doesn't address my issues: 1) I don't see anything in the documentation about how to query based on the number of data points (they give an example using a "population" variable > 1000, but that's not the same); and 2) even if I can successfully select the cities that satisfy the threshold, I still need to join all areas with the base map, and the final map will still show cities with fewer than 100 data points as having an averaged sentiment of 0.

How to show 100K results of a complex query on a map without using graphics


  • I need to show all 100K points on the map without using any sort of clustering.
  • Since there are too many of them, they cannot be shown using a graphics layer.
  • Since they are the result of a very complex Oracle text query, it cannot be applied in a dynamic layer.
  • The only way to show that many graphics on a map is via an image, since the DOM can't handle that many elements.

So, to sum up, I want to:
(Apply a complex query and get 100K points) -> (pass the IDs or lat/lon to a service) -> (get the generated image layer)

Any suggested solutions? (Oracle 12g, ArcGIS 10.2)

Expand raster nodata area (shrink the valid data area)


I need to fix a raster that's been inappropriately resampled along the nodata edge. The bad data area is narrow, pretty much a single cell. The value range is too variable to use a simple "erase values less than 20" kind of logic. How can I fix this?

I'm partial to a GDAL command-line utility or QGIS solution, but anything goes.
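Conceptually what I'm after is growing the nodata mask by a cell or two and re-applying it, something like this sketch with the GDAL Python bindings and SciPy (the file names, band handling and nodata value lookup are placeholders):

import numpy as np
from osgeo import gdal
from scipy import ndimage

src = gdal.Open("dem.tif")                    # placeholder input
band = src.GetRasterBand(1)
nodata = band.GetNoDataValue()
data = band.ReadAsArray()

# Grow the nodata mask by 2 cells so the bad edge cells get swallowed.
mask = (data == nodata)
grown = ndimage.binary_dilation(mask, iterations=2)
data[grown] = nodata

# Write the trimmed result to a copy so georeferencing is preserved.
dst = gdal.GetDriverByName("GTiff").CreateCopy("dem_trimmed.tif", src)
dst.GetRasterBand(1).WriteArray(data)
dst.FlushCache()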

In the past I've used GIMP or Photoshop for this (magic-wand select nodata with 0 tolerance, expand the selection by 1 or 2 px, delete, save, restore georeferencing), but I can't do that here. The image is 16-bit (a DEM), which photo tools don't handle well (GIMP not at all), and at 4 GB it is too large to manage comfortably anyway.

In Figure 1, black is the bad data to be removed; in Figure 2, black is to be retained, though it is acceptable to lose 1 or 2 pixels.

Update: Figure 3 is a close-up showing cell values. A de-collaring tool like nearblack doesn't work because the bad values are within the valid value range, and often nowhere near black or white.

I’ve put samples from the dataset here:
http://files.environmentyukon.ca/matt/gis-stack/expand-raster-nodata-area/

Update 2: removed the GDAL/QGIS focus.

Figure 1, black is bad
Figure 2, black is good
Figure 3, showing cell values

Reclassify a density surface with a variable range of values


I have a set of points with a 'weight' value. I want to create a density surface from these points, classify it with 5 or so bins (the actual number is arbitrary, but smaller than the default output, which seems to be 9 bins), and then select the points that fall within the highest-value areas. I want to do this as a tool users can run on demand from a web application, as the source data will change frequently.

The methodology I've come up with is a geoprocessing service that produces an output service the web app can consume.

The geoprocessing tool I came up with:

  1. Runs the Kernel Density tool to create the weighted density surface
  2. Runs the Reclassify tool to classify the raster with the correct number of bins
  3. Runs the Raster to Polygon tool

The problem is the Reclassify tool, which is unfortunate because the Raster to Polygon tool requires an integer input. Once Reclassify has run once, it keeps the same breakpoints for the bins on every subsequent run. This is problematic because some of the actual values in a subsequent run could fall outside the range specified in the tool, and the relevance of the initial classification to later data also changes along with the data.
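To make the problem concrete, the tool boils down to something like the sketch below (the input path, weight field and break values are placeholders; the hard-coded remap is the part that never updates):

import arcpy
from arcpy.sa import KernelDensity, Reclassify, RemapRange

arcpy.CheckOutExtension("Spatial")

points = r"C:\data\obs.gdb\weighted_points"   # placeholder input

# 1. Weighted density surface
density = KernelDensity(points, "weight")

# 2. Reclassify into 5 integer bins. The break values below were captured
#    from one run and then stay fixed on every later run, which is the problem.
remap = RemapRange([[0, 10, 1], [10, 20, 2], [20, 30, 3], [30, 40, 4], [40, 50, 5]])
classed = Reclassify(density, "VALUE", remap)

# 3. Integer raster to polygons
arcpy.RasterToPolygon_conversion(classed, r"C:\data\obs.gdb\density_zones")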

So… is there a way to force the Reclassify tool to pick a classification method (say, Natural Breaks) and force it to calculate breakpoints for 5 bins every time it runs? Am I going about this all wrong?
