auth_registrations_mapping error: could not create, while running pip install twilio on cmd with admin access

Note: most of the solutions to similar problems suggest retrying from an admin CMD. I have tried that, and it still won't install and returns the same error:

```
SoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\twilio\rest\api\v2010\account\sip\domain\auth_types\auth_registrations_mapping
error: could not create 'C:\Users\manoj\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\twilio\rest\api\v2010\account\sip\domain\auth_types\auth_registrations_mapping\auth_registrations_credential_list_mapping.py': No such file or directory
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\Users\manoj\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\manoj\\AppData\\Local\\Temp\\pip-install-6lxfxplk\\twilio_26ffaa2a98754c52996da5165593b5ba\\setup.py'"'"'; __file__='"'"'C:\\Users\\manoj\\AppData\\Local\\Temp\\pip-install-6lxfxplk\\twilio_26ffaa2a98754c52996da5165593b5ba\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\manoj\AppData\Local\Temp\pip-record-ovyqxs5v\install-record.txt' --single-version-externally-managed --user --prefix= --compile --install-headers 'C:\Users\manoj\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\Include\twilio'
Check the logs for full command output.
```
Twilio developer evangelist here. I believe you have hit the Windows path-length limit of 260 characters, which is why this file fails to be created. You can enable longer paths by following the instructions in the Windows documentation here. Essentially, the registry value Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem\LongPathsEnabled (type: REG_DWORD) must exist and be set to 1. Then, after a reboot, you should be able to use longer paths and your installation should succeed.
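If you prefer to check (or flip) that value from Python instead of regedit, here is a minimal sketch using the standard winreg module. It assumes Windows, and the write requires an elevated (admin) interpreter; a reboot is still needed afterwards.

```python
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\FileSystem"

def long_paths_enabled() -> bool:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
        try:
            value, _ = winreg.QueryValueEx(key, "LongPathsEnabled")
        except FileNotFoundError:
            return False  # value absent means long paths are off
        return value == 1

def enable_long_paths() -> None:
    # requires admin rights; reboot for the change to take effect
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "LongPathsEnabled", 0, winreg.REG_DWORD, 1)

if __name__ == "__main__":
    print("Long paths enabled:", long_paths_enabled())
```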
How can I compare two DateTimeFields with the current time

Event has two DateTimeFields and I need to compare them to the current date and time in the same method.

```python
from django.utils import timezone

event_starts = models.DateTimeField(_("Event starts"))
registration_closes_at = models.DateTimeField(
    _("Registration ends:"), null=True, blank=True
)
```

This is what I have tried, but it doesn't work. What I need is: if the event has started or registration was closed, the user cannot attend this event.

```python
def is_registration_open(self):
    now = timezone.now()
    passed_registration = now > self.registration_closes_at
    passed_start = now > self.event_starts
    if not passed_registration or not passed_start:
        return
```

And I tried this:

```python
def is_registration_open(self):
    if (
        not timezone.now() > self.registration_closes_at
        or not timezone.now() > self.event_starts
    ):
        return
```

Here is the failure:

```
'>' not supported between instances of 'datetime.datetime' and 'NoneType'
```

When I compare only event_starts, everything works fine. Thanks for the help!
Notice that the registration_closes_at field is nullable, therefore you need to handle that case in the condition. My guess is that you would like to ignore registration_closes_at when it is null, so the condition would look like:

```python
def is_registration_open(self):
    if self.registration_closes_at:
        # before registration closes
        return timezone.now() < self.registration_closes_at
    else:
        # before event starts
        return timezone.now() < self.event_starts
```

An even better option is to use annotation and check the condition on the database side for better performance:

```python
from django.db.models import Count, Case, When, Value, BooleanField
from django.db.models.functions import Now

MyModel.objects.annotate(
    can_register=Case(
        When(registration_closes_at__gt=Now(), then=Value(True)),
        When(event_starts__gt=Now(), then=Value(True)),
        default=Value(False),
        output_field=BooleanField(),
    )
)
```
How to output nothing

Hi, I wrote a program which tells me whether or not the input consists of hexadecimal digits.

```python
hexadecimal = ['0','1','2','3','4','5','6','7','8','9',
               'a','A','b','B','c','C','d','D','e','E','f','F']
output = ''
for c in hexadecimal:
    digit = input('Digit: ')
    output += c.join(digit)
    if digit == '':
        print(output, 'is a valid hexadecimal string.')
        break
    elif digit not in hexadecimal:
        print(digit, 'is not a valid hexadecimal digit.')
        break
```

The complete program works; the only problem is that I need to handle the case where the user doesn't enter anything at all.
Use another if statement before appending the digit to output:

```python
if len(output) == 0 and len(digit) == 0:
    print("input is blank")
```

This checks that the user hasn't previously entered anything and hasn't currently entered anything; if both are true, tell them the input is blank.
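For placement, here is a minimal sketch of where that check fits relative to the rest of the loop. This is a simplified, hypothetical reduction of the question's code, not a drop-in replacement:

```python
output = ''
while True:
    digit = input('Digit: ')
    # blank-input check goes right after reading, before anything is appended
    if len(output) == 0 and len(digit) == 0:
        print('input is blank')
        break
    if digit == '':
        print(output, 'is a valid hexadecimal string.')
        break
    output += digit
```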
Django Login with Email or Phone Number

I have been trying to find a solution to a problem in Django for a very long time: I am trying to develop a login system that can use either an email address or a phone number to authenticate the user.
Well, that can be done by creating a custom user model. I have tested this; it works.

First steps

The first thing you need to do is create a new Django project. Make sure you don't run migrations, because there are still a few things we need to do before then. After creating your new Django project, create a new app called accounts with the following command:

```
python manage.py startapp accounts
```

Creating the User Model

By default, the User model provided by Django has a username field and an email field. However, we also need a phone number field. In order to add this field, we need to extend the Django user model. In the accounts app's models.py file, type in the following code:

models.py

```python
phone_validator = RegexValidator(
    r"^(\+?\d{0,4})?\s?-?\s?(\(?\d{3}\)?)\s?-?\s?(\(?\d{3}\)?)\s?-?\s?(\(?\d{4}\)?)?$",
    "The phone number provided is invalid",
)


class User(AbstractBaseUser, PermissionsMixin):
    email = models.EmailField(max_length=100, unique=True)
    phone_number = models.CharField(max_length=16, validators=[phone_validator], unique=True)
    full_name = models.CharField(max_length=30)
    is_active = models.BooleanField(default=True)
    is_admin = models.BooleanField(default=False)
    # is_translator = models.BooleanField(default=False)

    objects = CustomUserManager()

    USERNAME_FIELD = 'phone_number'
    REQUIRED_FIELDS = ['email', 'full_name']

    def __str__(self):
        return self.email

    @staticmethod
    def has_perm(perm, obj=None, **kwargs):
        return True

    @staticmethod
    def has_module_perms(app_label, **kwargs):
        return True

    @property
    def is_staff(self):
        return self.is_admin
```

Register the model with the admin

admin.py

```python
class UserAdmin(BaseUserAdmin):
    form = UserChangeForm
    add_form = UserCreationForm
    list_display = ('email', 'phone_number', 'full_name', 'is_active', 'is_admin')
    list_filter = ('is_active', 'is_admin')
    fieldsets = (
        (None, {'fields': ('full_name', 'email', 'phone_number', 'password')}),
        ('Permissions', {'fields': ('is_active', 'is_admin', 'is_superuser',
                                    'last_login', 'groups', 'user_permissions')}),
    )
    add_fieldsets = (
        (None, {'fields': ('full_name', 'phone_number', 'email', 'password1', 'password2')}),
    )
    search_fields = ('email', 'full_name')
    ordering = ('email',)
    filter_horizontal = ('groups', 'user_permissions')

    def get_form(self, request, obj=None, **kwargs):
        form = super().get_form(request, obj, **kwargs)
        is_superuser = request.user.is_superuser
        if is_superuser:
            form.base_fields['is_superuser'].disabled = True
        return form


admin.site.register(User, UserAdmin)
```

forms.py

```python
class UserLoginForm(forms.Form):
    email = forms.CharField(max_length=50)
    password = forms.CharField(widget=forms.PasswordInput(attrs={'class': 'form-control'}))
```

For logging the customer in via login.html:

views.py

```python
import random
from .backends import EmailPhoneUsernameAuthenticationBackend as EoP


class UserLoginView(View):
    form_class = UserLoginForm
    template_name = 'accounts/login.html'

    def dispatch(self, request, *args, **kwargs):
        if request.user.is_authenticated:
            return redirect('core:home')
        return super().dispatch(request, *args, **kwargs)

    def get(self, request):
        form = self.form_class
        return render(request, self.template_name, {'form': form})

    def post(self, request):
        form = self.form_class(request.POST)
        if form.is_valid():
            cd = form.cleaned_data
            user = EoP.authenticate(request, username=cd['email'], password=cd['password'])
            if user is not None:
                login(request, user)
                messages.success(request, 'You have successfully logged in!', 'success')
                return redirect('core:home')
            else:
                messages.error(request, 'Your email or password is incorrect!', 'danger')
        return render(request, self.template_name, {'form': form})
```

Writing a Custom Backend

backends.py

```python
from django.contrib.auth.hashers import check_password
from django.contrib.auth import get_user_model
from django.db.models import Q

User = get_user_model()


class EmailPhoneUsernameAuthenticationBackend(object):
    @staticmethod
    def authenticate(request, username=None, password=None):
        try:
            user = User.objects.get(
                Q(phone_number=username) | Q(email=username)
            )
        except User.DoesNotExist:
            return None
        if user and check_password(password, user.password):
            return user
        return None

    @staticmethod
    def get_user(user_id):
        try:
            return User.objects.get(pk=user_id)
        except User.DoesNotExist:
            return None
```

Update the settings (3 options)

settings.py

```python
INSTALLED_APPS = [
    ...
    # Third-party apps
    'accounts.apps.AccountsConfig',
    ...
]

AUTH_USER_MODEL = 'accounts.User'

AUTHENTICATION_BACKENDS = ['accounts.backends.EmailPhoneUsernameAuthenticationBackend']
```

I hope your problem will be solved and others will use this too.
Multi Layer Perceptron Deep Learning in Python using PyTorch

I am getting an error when executing the train function of my MLP code. This is the error:

```
mat1 and mat2 shapes cannot be multiplied (128x10 and 48x10)
```

My code is:

```python
class net(nn.Module):
    def __init__(self, input_dim2, hidden_dim2, output_dim2):
        super(net, self).__init__()
        self.input_dim2 = input_dim2
        self.fc1 = nn.Linear(input_dim2, hidden_dim2)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_dim2, hidden_dim2)
        self.fc3 = nn.Linear(hidden_dim2, output_dim2)

    def forward(self, x):
        x = self.fc1(x)
        x = self.relu(x)
        x = self.fc2(x)
        x = self.relu(x)
        x = self.fc3(x)
        x = F.softmax(self.fc3(x))
        return x


model = net(input_dim2, hidden_dim2, output_dim2)  # create the network
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.RMSprop(model.parameters(), lr=learning_rate2)


def train(num_epochs2):
    for i in range(num_epochs2):
        tmp_loss = []
        for (x, y) in train_loader:
            print(y.shape)
            print(x.shape)
            outputs = model(x)              # forward pass
            print(outputs.shape)
            loss = criterion(outputs, y)    # loss computation
            tmp_loss.append(loss.item())    # recording the loss
            optimizer.zero_grad()           # clear the accumulated gradient
            loss.backward()                 # auto-differentiation - accumulation of gradient
            optimizer.step()                # a gradient step
        print("Loss at {}th epoch: {}".format(i, np.mean(tmp_loss)))
```

I can't see where I'm going wrong; the code looks right to me.
From the limited message, I guess the place you went wrong is this snippet:

```python
x = self.fc3(x)
x = F.softmax(self.fc3(x))
```

Here fc3 is applied twice, so its output (of size output_dim2) is fed back into a layer that expects hidden_dim2 inputs, which is exactly the shape mismatch in the error. Try replacing it with:

```python
x = self.fc3(x)
x = F.softmax(x)
```

A good question should include the error backtrace and a complete toy example that reproduces the error!
Read Excel with multiple headers and unnamed column

I receive Excel files laid out like this:

```
     USA         UK
     plane cars  plane cars
2016 2     7     1     3     # a comment after the last country
2017 3     1     8     4
```

There is an unknown number of countries, and there can be a comment after the last column. When I read the Excel file like this...

```python
df = pd.read_excel(
    sourceFilePath,
    sheet_name='Sheet1',
    index_col=[0],
    header=[0, 1]
)
```

...I get a value error:

```
ValueError: Length of new names must be 1, got 2
```

The problem is that I cannot use the usecols parameter, because I don't know how many countries there are before reading the file. How can I read such a file?
It's possible Pandas won't be able to handle your special use case, but you can write a program that fixes the spreadsheet using openpyxl. It has really clear documentation, but here's an overview of how to use it:

```python
import openpyxl as xl

wb = xl.load_workbook("ExampleSheet.xlsx")
for sheet in wb.worksheets:
    print("Sheet Title => {}".format(sheet.title))
    print("Dimensions => {}".format(sheet.dimensions))  # just returns a string
    print("Columns: {} <-> {}".format(sheet.min_column, sheet.max_column))
    print("Rows: {} <-> {}".format(sheet.min_row, sheet.max_row))
    for r in range(sheet.min_row, sheet.max_row + 1):
        for c in range(sheet.min_column, sheet.max_column + 1):
            if sheet.cell(r, c).value != None:
                print("Cell {}:{} has value {}".format(r, c, sheet.cell(r, c).value))
```
Convert numpy rows into columns based on ID

Suppose I have a numpy array that maps between IDs of two item types:

```python
[[1, 12],
 [1, 13],
 [1, 14],
 [2, 13],
 [2, 14],
 [3, 11]]
```

I would like to rearrange this array so that each row in the new array represents all items that matched the same ID in the original array. Each column would represent one of the mappings in the original array, up to a specified limit on the number of columns in the new array. If we wanted to obtain this result from the above array with at most 2 columns, we would get:

```python
[[12, 13],  # Represents 1 - 14 was not kept as only 2 columns are allowed
 [13, 14],  # Represents 2
 [11, 0]]   # Represents 3 - 0 was used as padding since 3 did not have 2 mappings
```

The naïve approach would be a for-loop that populates the new array as it encounters rows in the original array. Is there a more efficient way to accomplish this with numpy's functionality?
Here is a general and mostly Numpythonic approach:

```python
In [144]: def array_packer(arr):
     ...:     cols = arr.shape[1]
     ...:     ids = arr[:, 0]
     ...:     inds = np.where(np.diff(ids) != 0)[0] + 1
     ...:     sp = np.split(arr[:, 1:], inds)
     ...:     result = [np.unique(a[:cols]) if a.shape[0] >= cols else
     ...:               np.pad(np.unique(a), (0, (cols - 1) * (cols - a.shape[0])), 'constant')
     ...:               for a in sp]
     ...:     return result
```

Demo:

```python
In [145]: a = np.array([[1, 12, 15, 45],
     ...:               [1, 13, 23, 9],
     ...:               [1, 14, 14, 11],
     ...:               [2, 13, 90, 34],
     ...:               [2, 14, 23, 43],
     ...:               [3, 11, 123, 53]])

In [146]: array_packer(a)
Out[146]:
[array([ 9, 11, 12, 13, 14, 15, 23, 45,  0,  0,  0]),
 array([13, 14, 23, 34, 43, 90,  0,  0,  0,  0,  0,  0]),
 array([ 11,  53, 123,   0,   0,   0,   0,   0,   0,   0,   0,   0])]

In [147]: a = np.array([[1, 12, 15],
     ...:               [1, 13, 23],
     ...:               [1, 14, 14],
     ...:               [2, 13, 90],
     ...:               [2, 14, 23],
     ...:               [3, 11, 123]])

In [148]: array_packer(a)
Out[148]:
[array([12, 13, 14, 15, 23]),
 array([13, 14, 23, 90,  0,  0]),
 array([ 11, 123,   0,   0,   0,   0])]
```
Errors loading JSON with Flask and Angular

I've solved my issue, but I'd like to know what was going wrong so I can address it in the future. I'm having issues decoding incoming JSON for use in my Flask application. The code that sends it in Angular:

```javascript
$http.post("/login", JSON.stringify($scope.loginForm))
    .success(function(data, status, headers, config) {
        console.log(data);
    })
    .error(function(data, status, headers, config) {
        console.log("Submitting form failed!");
    });
```

It's important to note that the request type is set to application/json earlier up, with:

```javascript
$http.defaults.headers.post["Content-Type"] = "application/json";
```

The code that receives it within Flask:

```python
data = request.get_json()
email_address = data.get("email_address")
password = data.get("password")
```

Attempting to load it this way returns an error 400, but any other way leads to some very strange issues. For example:

```python
return json.dumps(request.get_json())
```

will log {"password": "password", "email_address": "[email protected]"} in the console, but attempting to do this:

```python
data = request.get_json()
email_address = data.get("email_address")
password = data.get("password")
```

with no difference whatsoever between this and the first block of code except that I'm not forcing it, I receive the exception "ValueError: need more than 1 value to unpack", which implies that there aren't two values to unpack. HOWEVER, they both work individually: if I do the above request and omit either of the data.get() lines, the other one works.

What about my setup causes my JSON object to disintegrate the first time it's accessed? I got around this by using request.json instead of request.get_json(), but as request.json is being deprecated, it's fairly important that I know how to solve this properly in the future. Any pointers would be appreciated!
You can omit JSON.stringify and pass the object directly to the $http.post() method, because Angular will serialize it to JSON automatically if the data is an object. I assume that calling JSON.stringify yourself forces Angular to send it as x-www-form-urlencoded instead of the application/json media type. See the default transformations section of the angular $http service documentation.
Accessing python generators in parallel using multiprocessing module

I have a Python generator which pulls in a pretty huge table from a data warehouse. After pulling in the data, I process it with celery in a distributed manner. After testing I realized that the generator is the bottleneck: it can't produce enough tasks for the celery workers to work on. So I decided to optimize my Python generator.

More details on the generator: the generator hits the data warehouse with chunked queries, and these query results are basically independent of each other and stateless. So I thought this is a good candidate for parallelization with the multiprocessing module. I looked around for how to parallelize generator fetches without finding much direction. If my Python generator generates stateless chunks of data, this should be a good candidate for multiprocessing, right? Are there any ways to parallelize Python generators? Also, are there any side effects I should be aware of when using parallelism with Python generators?
I think you may be trying to solve this problem at the wrong level of abstraction. Python generators are inherently stateful, so you can't split a generator across processes without some form of synchronization, and that will kill any performance gains you might achieve through parallelism. I would recommend instead creating separate generators for each process and having them start at some offset from each other.

For example, if you have 4 processes, the first process handles the first chunk, then the 5th chunk, then the 9th chunk, and so on, adding N each time, where N is the number of processes you've set up. This requires you to hand a unique index to each of the processes at startup, as sketched below.
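Here is a minimal sketch of that stride/offset idea. It assumes the warehouse can be queried by chunk index; fetch_chunk() is a hypothetical stand-in for the real chunked query.

```python
from multiprocessing import Pool

N_PROCS = 4

def fetch_chunk(i):
    # placeholder for the real stateless chunk query against the warehouse
    return list(range(i * 10, (i + 1) * 10))

def worker(offset, n_chunks=12):
    # each process walks its own slice of chunk indices: offset, offset+N, ...
    results = []
    for i in range(offset, n_chunks, N_PROCS):
        results.extend(fetch_chunk(i))
    return results

if __name__ == "__main__":
    with Pool(N_PROCS) as pool:
        parts = pool.map(worker, range(N_PROCS))
```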
How do you pass a generated PDF to an API as an IO stream without writing it to disk?

I am using PyPDF2 to generate a PDF, and I would like to upload this PDF to Cloudinary, which accepts images as IO objects. The example from their docs:

```python
cloudinary.uploader.upload(open('/tmp/image1.jpg', 'rb'))
```

In my application, I instantiate a PdfFileWriter and add pages:

```python
output = PyPDF2.PdfFileWriter()
output.addPage(page)
```

Then I can save the generated PDF locally:

```python
outputStream = file(destination_file_name, "wb")
output.write(outputStream)
outputStream.close()
```

But obviously I'm trying to avoid this. Instead I'm trying to send an IO object to Cloudinary:

```python
image_StringIO_object = StringIO.StringIO()
output.write(image_StringIO_object)
cloudinary.uploader.upload(image_StringIO_object,
                           api_key=CLOUDINARY_API_KEY,
                           api_secret=CLOUDINARY_API_SECRET,
                           cloud_name=CLOUDINARY_CLOUD_NAME,
                           format="PDF")
```

This returns the error:

```
Empty file
```

If instead I try to pass the value of the StringIO object:

```python
cloudinary.uploader.upload(image_StringIO_object.getvalue(), ...)
```

I get the error:

```
file() argument 1 must be encoded string without null bytes, not str
```
Got the answer from Cloudinary support: the result of getvalue() on the StringIO object needs to be base64-encoded and prepended with a tag:

```python
out = StringIO.StringIO()
output.write(out)
cloudinary.uploader.upload("data:image/pdf;base64," + base64.b64encode(out.getvalue()))
```
What does DeprecationWarning mean when running Python

I ran a Python program and got a DeprecationWarning, like:

```
D:\programs\anaconda2\lib\site-packages\sklearn\utils\deprecation.py:70: DeprecationWarning: Function log_multivariate_normal_density is deprecated; The function log_multivariate_normal_density is deprecated in 0.18 and will be removed in 0.20.
  warnings.warn(msg, category=DeprecationWarning)
```

I can't work out what is wrong about it. What is a DeprecationWarning?
As developers work on a library, they add, change, and sometimes remove features. Removing a feature is risky, because users may depend on it, so before a developer removes something they first have to tell users to stop relying on it. A DeprecationWarning is that notification: the feature still works for now, but it is scheduled for removal in a future release (here, log_multivariate_normal_density works in 0.18 but will be gone in 0.20).
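You can control how Python surfaces these warnings with the standard warnings module; a minimal sketch:

```python
import warnings

# escalate DeprecationWarning to an error, so the offending call is easy to locate
warnings.filterwarnings("error", category=DeprecationWarning)

# or silence it entirely once you have decided it is safe to ignore
# warnings.filterwarnings("ignore", category=DeprecationWarning)
```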
I have a ModuleNotFoundError: No module named 'Crypto'

Please help. When I try to run myfile.py it says ModuleNotFoundError: No module named 'Crypto', but I already have the paths for 'conda' and 'anaconda3' in my environment variables. I have also uninstalled and reinstalled pycryptodome, yet it still says ModuleNotFoundError: No module named 'Crypto'. This is what I used in myfile.py:

```python
from Crypto import Random
from Crypto.Cipher import AES
import builtins
import os
import os.path
from os import listdir
from os.path import isfile, join
import time
```

I also had these packages installed:

```
alabaster 0.7.12
anaconda-client 1.7.2
anaconda-navigator 1.9.12
anaconda-project 0.8.3
argh 0.26.2
asn1crypto 1.3.0
astroid 2.4.2
astropy 4.0.1.post1
atomicwrites 1.4.0
attrs 19.3.0
autopep8 1.5.3
Babel 2.8.0
backcall 0.2.0
backports.functools-lru-cache 1.6.1
backports.shutil-get-terminal-size 1.0.0
backports.tempfile 1.0
backports.weakref 1.0.post1
bcrypt 3.1.7
beautifulsoup4 4.9.1
bitarray 1.4.0
bkcharts 0.2
bleach 3.1.5
bokeh 2.1.1
boto 2.49.0
Bottleneck 1.3.2
brotlipy 0.7.0
certifi 2020.6.20
cffi 1.14.0
chardet 3.0.4
click 7.1.2
cloudpickle 1.5.0
clyent 1.2.2
colorama 0.4.3
comtypes 1.1.7
conda 4.8.3
conda-build 3.18.11
conda-package-handling 1.7.0
conda-verify 3.4.2
contextlib2 0.6.0.post1
cryptography 2.9.2
cycler 0.10.0
Cython 0.29.21
cytoolz 0.10.1
dask 2.20.0
decorator 4.4.2
defusedxml 0.6.0
diff-match-patch 20200713
distributed 2.20.0
docutils 0.16
entrypoints 0.3
et-xmlfile 1.0.1
fastcache 1.1.0
filelock 3.0.12
flake8 3.8.3
Flask 1.1.2
fsspec 0.7.4
future 0.18.2
gevent 20.6.2
glob2 0.7
gmpy2 2.0.8
greenlet 0.4.16
h5py 2.10.0
HeapDict 1.0.1
html5lib 1.1
idna 2.10
imageio 2.9.0
imagesize 1.2.0
importlib-metadata 1.7.0
intervaltree 3.0.2
ipykernel 5.3.2
ipython 7.16.1
ipython-genutils 0.2.0
ipywidgets 7.5.1
isort 4.3.21
itsdangerous 1.1.0
jdcal 1.4.1
jedi 0.17.1
Jinja2 2.11.2
joblib 0.16.0
json5 0.9.5
jsonschema 3.2.0
jupyter 1.0.0
jupyter-client 6.1.6
jupyter-console 6.1.0
jupyter-core 4.6.3
jupyterlab 2.1.5
jupyterlab-server 1.2.0
keyring 21.2.1
kiwisolver 1.2.0
lazy-object-proxy 1.4.3
libarchive-c 2.9
llvmlite 0.33.0+1.g022ab0f
locket 0.2.0
lxml 4.5.2
MarkupSafe 1.1.1
matplotlib 3.2.2
mccabe 0.6.1
menuinst 1.4.16
mistune 0.8.4
mkl-fft 1.1.0
mkl-random 1.1.1
mkl-service 2.3.0
mock 4.0.2
more-itertools 8.4.0
mpmath 1.1.0
msgpack 1.0.0
multipledispatch 0.6.0
navigator-updater 0.2.1
nbconvert 5.6.1
nbformat 5.0.7
networkx 2.4
nltk 3.5
nose 1.3.7
notebook 6.0.3
numba 0.50.1
numexpr 2.7.1
numpy 1.18.5
numpydoc 1.1.0
olefile 0.46
openpyxl 3.0.4
packaging 20.4
pandas 1.0.5
pandocfilters 1.4.2
paramiko 2.7.1
parso 0.7.0
partd 1.1.0
path 13.1.0
pathlib2 2.3.5
pathtools 0.1.2
patsy 0.5.1
pep8 1.7.1
pexpect 4.8.0
pickleshare 0.7.5
Pillow 7.2.0
pip 20.1.1
pkginfo 1.5.0.1
pluggy 0.13.1
ply 3.11
prometheus-client 0.8.0
prompt-toolkit 3.0.5
psutil 5.7.0
py 1.9.0
pycodestyle 2.6.0
pycosat 0.6.3
pycparser 2.20
pycryptodomex 3.9.9
pycurl 7.43.0.5
pydocstyle 5.0.2
pyflakes 2.2.0
Pygments 2.6.1
pylint 2.5.3
PyNaCl 1.4.0
pyodbc 4.0.0-unsupported
pyOpenSSL 19.1.0
pyparsing 2.4.7
pyreadline 2.1
pyrsistent 0.16.0
PySocks 1.7.1
pytest 5.4.3
python-dateutil 2.8.1
python-jsonrpc-server 0.3.4
python-language-server 0.34.1
pytz 2020.1
PyWavelets 1.1.1
pywin32 227
pywin32-ctypes 0.2.0
pywinpty 0.5.7
PyYAML 5.3.1
pyzmq 19.0.1
QDarkStyle 2.8.1
QtAwesome 0.7.2
qtconsole 4.7.5
QtPy 1.9.0
regex 2020.6.8
requests 2.24.0
rope 0.17.0
Rtree 0.9.4
ruamel-yaml 0.15.87
scikit-image 0.16.2
scikit-learn 0.23.1
scipy 1.5.0
seaborn 0.10.1
Send2Trash 1.5.0
setuptools 49.2.0.post20200714
simplegeneric 0.8.1
singledispatch 3.4.0.3
sip 4.19.13
six 1.15.0
snowballstemmer 2.0.0
sortedcollections 1.2.1
sortedcontainers 2.2.2
soupsieve 2.0.1
Sphinx 3.1.2
sphinxcontrib-applehelp 1.0.2
sphinxcontrib-devhelp 1.0.2
sphinxcontrib-htmlhelp 1.0.3
sphinxcontrib-jsmath 1.0.1
sphinxcontrib-qthelp 1.0.3
sphinxcontrib-serializinghtml 1.1.4
sphinxcontrib-websupport 1.2.3
spyder 4.1.4
spyder-kernels 1.9.2
SQLAlchemy 1.3.18
statsmodels 0.11.1
sympy 1.6.1
tables 3.6.1
tblib 1.6.0
terminado 0.8.3
testpath 0.4.4
threadpoolctl 2.1.0
toml 0.10.1
toolz 0.10.0
tornado 6.0.4
tqdm 4.47.0
traitlets 4.3.3
typing-extensions 3.7.4.2
ujson 1.35
unicodecsv 0.14.1
urllib3 1.25.9
watchdog 0.10.3
wcwidth 0.2.5
webencodings 0.5.1
Werkzeug 1.0.1
wheel 0.34.2
widgetsnbextension 3.5.1
win-inet-pton 1.1.0
win-unicode-console 0.5
wincertstore 0.2
wrapt 1.11.2
xlrd 1.2.0
XlsxWriter 1.2.9
xlwings 0.19.5
xlwt 1.3.0
xmltodict 0.12.0
yapf 0.30.0
zict 2.0.0
zipp 3.1.0
zope.event 4.4
zope.interface 4.7.1
```
Your package list seems to be missing the required library (pyCrypto). Not too familiar with conda, but try:

```
$ conda install pyCrypto
```

Here is an example with just python:

```
root@47cbabc35dca:/# python
Python 3.9.0 (default, Nov 18 2020, 13:28:38)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from Crypto import Random
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'Crypto'
>>>
root@47cbabc35dca:/# python -m pip install pyCrypto
Collecting pyCrypto
  Downloading pycrypto-2.6.1.tar.gz (446 kB)
     |████████████████████████████████| 446 kB 3.0 MB/s
Building wheels for collected packages: pyCrypto
  Building wheel for pyCrypto (setup.py) ... done
  Created wheel for pyCrypto: filename=pycrypto-2.6.1-cp39-cp39-linux_x86_64.whl size=521774 sha256=efc58b4ee8a1b8404b06b4d5aa75e51ebe90e6eec90e68f00b1e805582fcbc74
  Stored in directory: /root/.cache/pip/wheels/9d/29/32/8b8f22481bec8b0fbe7087927336ec167faff2ed9db849448f
Successfully built pyCrypto
Installing collected packages: pyCrypto
Successfully installed pyCrypto-2.6.1
root@47cbabc35dca:/# python
Python 3.9.0 (default, Nov 18 2020, 13:28:38)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from Crypto import Random
>>>
```
How to harness a for loop with the functionality of multiple buttons

I am stuck with the code below. Either I cannot find a simple answer to my problem because my searches are not narrow enough, or I am just too blind to see it. Anyway, I am looking to put the "+" and "-" buttons to use: they are supposed to literally do what their assigned symbols say.

With my level of Python knowledge I can only achieve that by creating a single function for each button, which is a lot of code. I wonder if it is possible to create a loop that could save tons of code and still be able to update the label called "stock" in the same row as the pressed button. At the moment I have assigned random numbers to that label, but in a bigger scope that label will be populated by integers taken from a db. I would be very grateful if anyone could point me in the right direction.

```python
import tkinter as tk
from tkinter import Tk
import random

root = tk.Tk()

my_list = dict(AAA=["aa1", "aa2", "aa3"],
               BBB=["ab1", "ab2", "ab3", "ab4", "ab5"],
               CCC=["ac1", "ac2", "ac3", "ac4", "ac5", "ac6"],
               DDD=["ad1", "ad2", "ad3", "ad4", "ad5", "ad6"],
               EEE=["ae1", "ae2", "ae3", "ae4", "ae5", "ae6"],
               FFF=["af1", "af2", "af3", "af4", "af5", "af6"],
               GGG=["ag1", "ag2", "ag3", "ag4", "ag5", "ag6"],
               HHH=["ah1", "ah2", "ah3", "ah4", "ah5", "ah6"])

for x, y in enumerate(my_list):
    xyz = x * 4
    tk.Label(root, text=y, width=25, bd=3, relief=tk.GROOVE).grid(row=0, column=xyz, columnspan=4, padx=(0, 10))
    for xing, ying in enumerate(my_list[y]):
        tk.Label(root, text=ying, width=10, relief=tk.SUNKEN).grid(row=xing + 1, column=xyz)
        stock = tk.Label(root, text=random.randint(0, 9), width=5, relief=tk.SUNKEN)
        stock.grid(row=xing + 1, column=xyz + 1)
        tk.Button(root, text="+", width=3).grid(row=xing + 1, column=xyz + 2)
        tk.Button(root, text="-", width=3).grid(row=xing + 1, column=xyz + 3, padx=(0, 10))

root.mainloop()
```
Since you are only using 2 kinds of buttons for this, I think having 2 functions won't be too bad. However, if you had more than 2, a lambda function would be a good fit. I'm not sure exactly what your code is trying to accomplish, but my best guess is that you are trying to add and subtract numbers. I made a calculator GUI a while back where each button used a lambda: I had a general function expression(a) which took a as a string and added it to the final output, and each button had the command command=lambda: expression(symbol), so the + button was command=lambda: expression('+'). I think this will help you out; see the sketch below.
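Here is a minimal sketch of that lambda idea adapted to the question's +/- buttons. The adjust() helper and the simplified three-row layout are illustrative, not part of the original code; the default argument lbl=stock is what binds each button to its own row's label inside the loop.

```python
import tkinter as tk

root = tk.Tk()

def adjust(label, delta):
    # read the current stock value from the label and write back the new one
    label.config(text=int(label.cget("text")) + delta)

for row in range(3):
    stock = tk.Label(root, text=0, width=5, relief=tk.SUNKEN)
    stock.grid(row=row, column=0)
    # default argument freezes the current `stock` for this row
    tk.Button(root, text="+", width=3,
              command=lambda lbl=stock: adjust(lbl, +1)).grid(row=row, column=1)
    tk.Button(root, text="-", width=3,
              command=lambda lbl=stock: adjust(lbl, -1)).grid(row=row, column=2)

root.mainloop()
```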
Kivy widgets don't show up

I'm getting a strange error: I can only see a black window without the text label. I've also tried many other widgets and get the same bug! It came up after I fixed "Kivy does not detect OpenGL 2.0" by setting the environment variable KIVY_GL_BACKEND = angle_sdl2. I even tried uninstalling Python 3.7.3 and installing Python 3.7.7, then installed Kivy according to the official docs, and still have the same issue: https://kivy.org/doc/stable/installation/installation-windows.html#installation

Here is the code:

```python
from kivy.uix.label import Label
from kivy.app import App
import kivy

# the minimum OpenGL version supported by Kivy:
kivy.require('1.9.1')

# defining app class:
class HelloKivy(App):
    def build(self):
        return Label(text="Hello, Kivy")

# running the window:
HelloKivy().run()
```

Console log:

```
[INFO ] [Logger ] Record log in C:\Users\AVD\.kivy\logs\kivy_20-04-20_66.txt
[INFO ] [deps ] Successfully imported "kivy_deps.gstreamer" 0.1.17
[INFO ] [deps ] Successfully imported "kivy_deps.angle" 0.1.9
[INFO ] [deps ] Successfully imported "kivy_deps.glew" 0.1.12
[INFO ] [deps ] Successfully imported "kivy_deps.sdl2" 0.1.22
[INFO ] [Kivy ] v1.11.1
[INFO ] [Kivy ] Installed at "C:\Program Files\Python 3.7.3\lib\site-packages\kivy\__init__.py"
[INFO ] [Python ] v3.7.3 (v3.7.3:ef4ec6ed12, Mar 25 2019, 22:22:05) [MSC v.1916 64 bit (AMD64)]
[INFO ] [Python ] Interpreter at "C:\Program Files\Python 3.7.3\python.exe"
[INFO ] [Factory ] 184 symbols loaded
[INFO ] [Image ] Providers: img_tex, img_dds, img_sdl2, img_pil, img_gif (img_ffpyplayer ignored)
[INFO ] [Text ] Provider: sdl2
[INFO ] [Window ] Provider: sdl2
[INFO ] [Window ] Activate GLES2/ANGLE context
[INFO ] [GL ] Using the "OpenGL" graphics system
[INFO ] [GL ] Backend used <angle_sdl2>
[INFO ] [GL ] OpenGL version <b"OpenGL ES 2.0 (ANGLE 2.1.0.b'')">
[INFO ] [GL ] OpenGL vendor <b'Google Inc.'>
[INFO ] [GL ] OpenGL renderer <b'ANGLE (Intel(R) HD Graphics Direct3D11 vs_4_1 ps_4_1)'>
[INFO ] [GL ] OpenGL parsed version: 2, 0
[INFO ] [GL ] Shading version <b"OpenGL ES GLSL ES 1.00 (ANGLE 2.1.0.b'')">
[INFO ] [GL ] Texture max size <8192>
[INFO ] [GL ] Texture max units <16>
[INFO ] [Window ] auto add sdl2 input provider
[INFO ] [Window ] virtual keyboard not allowed, single mode, not docked
[INFO ] [Base ] Start application main loop
[INFO ] [GL ] NPOT texture support is available
[INFO ] [WindowSDL ] exiting mainloop and closing.
[INFO ] [Base ] Leaving application in progress...
Process finished with exit code 0
```

Output window: image of the output window, which shows only a black background without the text label.

I have also posted this thread on other forums:
- Python Forum: https://python-forum.io/Thread-Kivy-Kivy-text-label-won-t-shows-up
- Sololearn: https://www.sololearn.com/Discuss/2271231/kivy-text-label-won-t-shows-up
- Reddit: https://www.reddit.com/r/kivy/comments/geeyyd/kivy_widgets_wont_shows_up/?utm_source=share&utm_medium=web2x

Edit: Thanks to all for the help!
Instead of OpenGL, use sdl2:

```python
import os

os.environ['KIVY_TEXT'] = 'sdl2'
os.environ['KIVY_IMAGE'] = 'sdl2'
```
How could I store lambda functions inside a dictionary in python?

Someone shared their code and I saw a bunch of functions stored in what seemed to me to be a dictionary. I liked the idea, so I borrowed it. The code the person wrote was in JS, and I work with Python, so I translated it. Here is what that person wrote in JS:

```javascript
EasingFunctions = {
  // no easing, no acceleration
  linear: t => t,
  // accelerating from zero velocity
  easeInQuad: t => t * t,
  // decelerating to zero velocity
  easeOutQuad: t => t * (2 - t),
  // acceleration until halfway, then deceleration
  easeInOutQuad: t => t < .5 ? 2 * t * t : -1 + (4 - 2 * t) * t,
  // accelerating from zero velocity
  easeInCubic: t => t * t * t,
  // decelerating to zero velocity
  easeOutCubic: t => (--t) * t * t + 1,
  // acceleration until halfway, then deceleration
  easeInOutCubic: t => t < .5 ? 4 * t * t * t : (t - 1) * (2 * t - 2) * (2 * t - 2) + 1,
  // accelerating from zero velocity
  easeInQuart: t => t * t * t * t,
  // decelerating to zero velocity
  easeOutQuart: t => 1 - (--t) * t * t * t,
  // acceleration until halfway, then deceleration
  easeInOutQuart: t => t < .5 ? 8 * t * t * t * t : 1 - 8 * (--t) * t * t * t,
  // accelerating from zero velocity
  easeInQuint: t => t * t * t * t * t,
  // decelerating to zero velocity
  easeOutQuint: t => 1 + (--t) * t * t * t * t,
  // acceleration until halfway, then deceleration
  easeInOutQuint: t => t < .5 ? 16 * t * t * t * t * t : 1 + 16 * (--t) * t * t * t * t
}
```

It works fine if you run it. However, my translation gives me an error saying that I am missing a parenthesis, comma, or colon. Here is the code:

```python
EasingFunctions = {
    # no easing, no acceleration
    linear: lambda t : t,
    # accelerating from zero velocity
    easeInQuad: lambda t : t ** 2,
    # decelerating to zero velocity
    easeOutQuad: lambda t : t * (2-t),
    # acceleration until halfway, then deceleration
    easeInOutQuad: (lambda t : t = (2*(t**2)) if t < 0.5 else ((-1+(4-2*t)) * t)),
    # accelerating from zero velocity
    easeInCubic: lambda t : t * t * t,
    # decelerating to zero velocity
    easeOutCubic: lambda t : (t-1) * t * t + 1,
    # acceleration until halfway, then deceleration
    easeInOutCubic: lambda t : t = 4*t*t*t if t < 0.5 else (t - 1) * (2 * t - 2) * (2 * t - 2) + 1,
    # accelerating from zero velocity
    easeInQuart: lambda t : t ** 4,
    # decelerating to zero velocity
    easeOutQuart: lambda t : 1 - (t-1) * t * t * t,
    # acceleration until halfway, then deceleration
    easeInOutQuart: lambda t : t = 8 * t * t * t * t if t < 0.5 else 1 - 8 * (t) * t * t * t
    # accelerating from zero velocity
    easeInQuint: lambda t : t ** 5,
    # decelerating to zero velocity
    easeOutQuint: lambda t : 1 + (t-1) * t * t * t * t,
    # acceleration until halfway, then deceleration
    easeInOutQuint: lambda t : t = 16 * t * t * t * t * t if t < 0.5 else 1 + 16 * (t-1) * t * t * t * t
}
```

What confused me is that the error was pointed at the first key whose value has an if statement in it. I thought this was allowed in Python; what is wrong with the code?
As you mentioned in the comments that you still can't figure out how to do it using string keys for the dictionary, I'm posting this answer, though it was partially covered in the comments:

```python
a = {
    'linear': lambda t: t,
    'easeInQuad': lambda t: t ** 2,
    'easeOutQuad': lambda t: t * (2 - t),
    'easeOutQuint': lambda t: 1 + (t - 1) * t * t * t * t,
}

print(a['linear'](69))
print(a['easeInQuad'](69))
print(a['easeOutQuad'](69))
print(a['easeOutQuint'](69))
```

Result:

```
69
4761
-4623
1541364229
```

Again, as mentioned in the comments, Python doesn't support the -- operation. Hope this helps.
Using a function to define a var and then passing it to a constructor

I want client code to call a function that assigns user-inputted values to variables and passes them to an object, to be used as user-defined attributes. But I'm not able to call any methods on the object. I've tried including the methods for the object in the function, but it doesn't seem to change anything: same error.

```python
def function():
    a = input("blahblah")
    the_object = foo(a)

class foo(object):
    def __init__(a):
        self.a = a
        # This works fine
        print(self.a)

    def DoThing():
        # This does not
        print(self.a)

# Main
function()
the_object.DoThing()
```

I can see that the function is called and the object is created, but when I try to call any methods I keep getting the error:

```
NameError: name 'the_object' is not defined
```
The key point is that the_object was a local variable inside function(), so return the object and bind it at the call site. (The methods also need self as their first parameter.)

```python
def function():
    a = input("blahblah")
    return foo(a)

class foo(object):
    def __init__(self, a):
        self.a = a
        print(self.a)

    def DoThing(self):
        print(self.a)

# Main
the_object = function()
the_object.DoThing()
```

or

```python
class foo(object):
    def __init__(self, a):
        self.a = a
        print(self.a)

    def DoThing(self):
        print(self.a)

the_object = foo("blahblah")
the_object.DoThing()
```
Find diagonal without a loop or diag

I was trying to find the diagonal of the matrix b without using a loop or diag. I got the error: 'numpy.ndarray' object has no attribute 'index'. Not sure how to fix this.

```python
b = np.random.randint(low=1, high=11, size=(10, 10))
print(list(map(lambda x: x[b.index(x)], b)))
```
The numpy ndarray object doesn't behave exactly like a python list; specifically, as the error says, it does not have an index function. Here is one way to get around this: first convert b from a numpy ndarray to a standard python list,

```python
b = b.tolist()
```

and then the code you wrote works:

```python
print(list(map(lambda x: x[b.index(x)], b)))
```
Python: how to make datetime update the time

I'm using datetime with pytz, but I can't get the time to update.

```python
format = "[%B %d %H:%M]"
now_utc = datetime.now(timezone('UTC'))
greece = now_utc.astimezone(timezone('Europe/Athens'))
date = greece.strftime(format)
```

For example, if I print(date) at 11:30, it stays like that. Any idea?
As it is, date remains the same throughout the runtime; nothing updates it to the current time. If you want to check and print the time at regular intervals, you need to define a function and have your script call it repeatedly, waiting that amount of time between calls.

```python
import time

fmt = "[%B %d %H:%M]"

def print_now():
    now_utc = datetime.now(timezone('UTC'))
    greece = now_utc.astimezone(timezone('Europe/Athens'))
    date = greece.strftime(fmt)
    print(date)

while True:
    print_now()
    time.sleep(60)  # argument is time to wait in seconds
```

As long as True is True (which is always), the loop will continue, unless you define some condition to force it to end at some point. Of course, you could put the print_now() contents directly inside the while loop, but it's a bit cleaner to have them in their own function.
Understanding module & absolute / relative package imports

I have created a package containing sub-folders and I would like to import a parent module from a sub-package module. I have tried to follow the project structure suggested at https://docs.python-guide.org/writing/structure/ and attempted to replicate the step-by-step procedure listed at http://zetcode.com/lang/python/packages/, but it seems that I am missing something obvious about Python's package system. Here's my project structure:

```
watches/
-- ...
-- watches/
---- __init__.py (empty)
---- Logger.py
---- main.py
---- db/
------ __init__.py (empty)
------ EntryPoint.py
```

Logger.py contains a single class:

```python
class Logger:
    ...
```

I try to import Logger.py's class and methods from db/EntryPoint.py as follows:

```python
from watches.Logger import Logger

class EntryPoint:
    ...
```

Then I want to wrap everything up in main.py as follows:

```python
from db.EntryPoint import EntryPoint

if __name__ == "__main__":
    t = EntryPoint("local")
```

Finally, when I try to execute main.py with python3 main.py (so I am located in the watches/watches directory, as you can guess), I get the following error stack trace:

```
Traceback (most recent call last):
  File "main.py", line 1, in <module>
    from db.EntryPoint import EntryPoint
  File "some/absolute/path/watches/watches/db/EntryPoint.py", line 4, in <module>
    from watches.Logger import Logger
ModuleNotFoundError: No module named 'watches'
```
Every import is resolved relative to the location of the script being run - in your case, main.py. So the point of view of your program is:

```
- Logger.py
- __init__.py
- db/
--- __init__.py
--- EntryPoint.py
```

The program is not aware that it lives inside a package called watches, so if you want to import Logger.py, simply do:

```python
from Logger import Logger
```

Or move your main script to the parent folder.
How to 'save as' an edited image (png) using a file dialog in tkinter and PIL in Python

I am trying to create an image editor that adds text to an image using Pillow. My problem is with saving the edited image so that the user can choose the name of the saved file through a save-as dialog. Looking at other questions and answers, I came up with this:

```python
def onOpen(self):
    im = Image.open(askopenfilename())
    caption = simpledialog.askstring("Label", "What would you like the label on your picture to say?")
    fontsize = 15
    if im.mode != "RGB":
        im = im.convert("RGB")
    draw = ImageDraw.Draw(im)
    font = ImageFont.truetype("arial.ttf", fontsize)
    draw.text((0, 0), str(caption), (255, 0, 0), font=font)
    file = filedialog.asksaveasfile(mode='w', defaultextension=".png")
    if file:
        file.write(im)
        file.close()
```

However, I get the following error when running it:

```
Exception in Tkinter callback
Traceback (most recent call last):
  File "C:\Users\Renee\AppData\Local\Programs\Python\Python35-32\lib\tkinter\__init__.py", line 1550, in __call__
    return self.func(*args)
  File "C:\Users\Renee\AppData\Local\Programs\Python\Python35-32\tkinterguitest.py", line 52, in onOpen
    file.write(im)
TypeError: write() argument must be str, not Image
```

I know the issue is that write can only be used with strings, so is there a command like file.write but for images? Thanks!
You should save the image through the save method on the Image object (and open the file in binary mode, since an image is not text):

```python
file = filedialog.asksaveasfile(mode='wb', defaultextension=".png")
if file:
    im.save(file)  # saves the image under the chosen file name
```
Find one line and get the next lines

With this code I get the full line which includes Name, but I need to get that line AND the next 2 lines. I have no clue how to do this.

```python
def daten(s):
    for i in s:
        if i.find('Name') >= 1:
            daten = i
    return daten
```

Example:

```
AAA
Name
CCC
DDD
EEE
```

I want to get Name, CCC, and DDD.
This is another solution, if s is a file name:

```python
def daten(s):
    with open(s, 'r') as f:
        lines = f.read().splitlines()
    for i, line in enumerate(lines):
        if 'Name' in line:
            return lines[i:i+3]
```

It searches for the word 'Name' in any of the lines and, if it finds it, returns a list containing that line and the two lines that follow.
Determining whether a Pandas df column is an array

I want to see if a column in my dataframe is an actual list type in python. Here is what I'm currently doing:

```python
is_list_field = all([isinstance(_val, list) for _val in df.iloc[:, 1] if _val])
```

Does the above cover all scenarios (NaN, empty string, null, etc.), or is there a better way to do this?
Not fast, but it at least works:

```python
df.applymap(lambda x: type(x) == list).all()

A    False
B     True
dtype: bool
```

Input data:

```python
df = pd.DataFrame({'A': [1, 2], 'B': [[1, 2], [1, 2]]})
```
How to check if given data exists or not in mongodb and python

I used the code below:

```python
from pymongo import MongoClient

client = MongoClient()
db = client.mydb

if db.mycollections.find({"name": 'Chinna', "password": 'chinna11'}).count() > 0:
    print("true")
else:
    print("false")
```

but it returns the following warning:

```
DeprecationWarning: count is deprecated. Use Collection.count_documents instead.
```

If anyone knows, please help.
Assuming you're not interested in the documents themselves and only want to count the matching ones, replace .find() with .count_documents():

```python
from pymongo import MongoClient

client = MongoClient()
db = client.mydb

if db.mycollections.count_documents({"name": 'Chinna', "password": 'chinna11'}) > 0:
    print("true")
else:
    print("false")
```
Generating matches for 5th, 7th, 9th etc. Places

I have a method that generates matches for the best two teams of each group from the group stage:

```
Group A
T1───┐
     │
T2───┘
     ├───┐
T3───┐   │
     │   ├───T1
T4───┘   │
         ├───T6
Group B  │
T5───┐   │
     │   ├───T2
T6───┘   │
     ├───T5
T7───┐
     │
T8───┘
```

```python
def generate_final_stage(advanced_teams):
    teams_from_group_stage = advanced_teams
    matches = []
    for i in range(len(teams_from_group_stage)):
        teams = []
        if i != len(teams_from_group_stage) - 1:
            team_1 = teams_from_group_stage[i][0]
            team_2 = teams_from_group_stage[i + 1][1]
            teams.append(team_1)
            teams.append(team_2)
        else:
            team_1 = teams_from_group_stage[i][0]
            team_2 = teams_from_group_stage[0][1]
            teams.append(team_1)
            teams.append(team_2)
        matches.append([teams[0], teams[1]])
    return matches


def main():
    # Possible inputs
    advanced_teams = [[1, 2, 3], [4, 5, 6]]
    advanced_teams2 = [[1, 2, 3, 4], [5, 6, 7, 8]]
    advanced_teams3 = [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]]
    schedule = generate_final_stage(advanced_teams3)
    print(schedule)


if __name__ == "__main__":
    main()
```

I would like to improve this script so that it also generates matches for the subsequent places. If 3, 4, 5 or more teams per group go to the playoff, the script should generate the games accordingly. If the number of remaining teams is not odd, games must be paired best team vs. worst team. For example, in every case:

1st team from Group A vs. 2nd team from Group B
1st team from Group B vs. 2nd team from Group A

When 3 teams go from the group stage to the playoff, additionally:

3rd team from Group A vs. 3rd team from Group B

When 4 teams go to the playoff:

3rd team from Group A vs. 4th team from Group B
3rd team from Group B vs. 4th team from Group A

When 5 teams go to the playoff, additionally:

5th team from Group A vs. 5th team from Group B

and so on. For example, from an input of [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]] I expect the following output:

[[1, 7], [6, 2], [3, 9], [4, 8], [5, 10]]

The number of groups is also dynamic: there can be 2, 4, 6, 8, 10 and so on groups. In this case the first two groups play with each other, the next two likewise, and so on. For

[[1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12], [13, 14, 15, 16, 17, 18], [19, 20, 21, 22, 23, 24]]

the output should be:

[(1, 8), (7, 2), (3, 10), (9, 4), (5, 12), (11, 6), (13, 20), (14, 19), (15, 22), (16, 21), (17, 24), (18, 23)]
You can use recursion to get the desired result in the case of N teams¹ in M groups²:

```python
def playoff2g(g1, g2, r):
    """ Get matches for 2 groups """
    if len(g1) > 1:
        r.extend([(g1[0], g2[1]), (g2[0], g1[1])])
        playoff2g(g1[2:], g2[2:], r)
    elif len(g1) == 1:
        r.append((g1[0], g2[0]))
    return r


def playoff(gs):
    """ Get matches for multiple number of groups """
    res = []
    for i in range(0, len(gs) - 1, 2):
        res = playoff2g(gs[i], gs[i + 1], res)
    return res


groups = [[1, 2, 3, 4, 5, 6], [7, 8, 9, 10, 11, 12],
          [13, 14, 15, 16, 17, 18], [19, 20, 21, 22, 23, 24]]
result = playoff(groups)
```

Output:

```
[(1, 8), (7, 2), (3, 10), (9, 4), (5, 12), (11, 6), (13, 20), (19, 14), (15, 22), (21, 16), (17, 24), (23, 18)]
```

¹ the number of teams should be the same in all groups
² the number of groups should be even (2, 4, 6, ...)
Python: How to slice string using string?

Assume the user entered:

"i like eating big apples"

I want to remove "eating" and "apples", together with whatever is in between those two words. The output in this case is:

"i like"

In another case, if the user entered:

"i like eating apples very much"

the expected output is:

"i like very much"

So I want to slice the input starting from "eating" up to "apples". Indexes cannot be used, since you can't be sure how long the user's input will be, but it is guaranteed that "eating" and "apples" will be entered. Is there any way to slice without using indexes, instead indicating the start and end of the slice with other strings?
You can do the following:

```python
s = "i like eating big apples"
start_ = s.find("eating")
end_ = s.find("apples") + len("apples")
s[start_:end_]  # 'eating big apples'
```

Use find() to locate the starting index of the desired word in the string, then adjust start_/end_ to your needs. To remove the substring:

```python
s[:start_] + s[end_:]  # 'i like'
```

And for the second case:

```python
s = "i like eating apples very much"
end_ = s.find("apples") + len("apples")
start_ = s.find("eating")
s[:start_] + s[end_:]  # 'i like very much'
```
How do I get the innermost item inside a dictionary in python

I loaded a file into a dictionary in python. Suppose it looks something like this:

```python
{'dictionary': {'a': {'second_level': {'data': 'hello'}},
                'b': {'another_level': {'this_one_has_three_levels': {'data': 'hi'}}}}}
```

Assuming I don't know any of the keys except for the key "data" (therefore I can't do dictionary[a][data]), how can I write a function that takes the whole dictionary as input and outputs something like this:

```python
["dictionary>a>second_level>data>'hello'",
 "dictionary>b>another_level>this_one_has_three_levels>data>'hi'"]
```

It needs to save the whole "path" to the innermost elements, which are "hello" and "hi". I guess I need some sort of loop here, but I can't figure it out.
A recursive approach is usually the simplest when the nesting depth is not known in advance. Here is an example with a generator function:

```python
def get_data(dct, key):
    if key in dct:
        yield f"{key}>{dct[key]}"
    for k, v in dct.items():
        if isinstance(v, dict):
            for s in get_data(v, key):
                yield f"{k}>{s}"
```

```python
>>> list(get_data(d, "data"))
['dictionary>a>second_level>data>hello', 'dictionary>b>another_level>this_one_has_three_levels>data>hi']
```
Replace specific words in python

If I have the string "this is" and I want to replace "is" with "was", then replace("is", "was") gives me "thwas was", but what I am expecting is "this was". Is there a solution?
You need to do something more sophisticated than a regular string replace. I'd suggest using regular expressions (the re module) and the \b escape sequence to match word boundaries:

```python
import re

re.sub(r"\bis\b", "was", "This is the best island!")
```

Result: 'This was the best island!'

By using the pattern r"\bis\b" instead of just "is", you ensure that you only match "is" when it appears as a stand-alone word (i.e. there are no numbers, letters or underscore characters directly adjacent to it in the original string). Here are some more examples of what matches and what doesn't:

```python
re.sub(r"\bis\b", "was", "is? is_number? is, isn't 3is is, ,is, is. hyphen-is. &is")
```

Result: "was? is_number? was, isn't 3is was, ,was, was. hyphen-was. &was"
Effective closures in python

With Python 2.7x I'm attempting to create a map object which can reference itself with a 'this' or 'self'. In JavaScript this would be roughly:

```javascript
myObj = function(){
    obj = {};
    this = obj;
    obj = {
        'a': 'b',
        'b': this.a
    };
    return obj;
}()
```

But in Python you can't do multi-line lambda expressions, and scoping doesn't behave the way I expect. I can create a function on a separate line and then call it, but this seems to lack pizzazz (especially since it isn't limited to being called only once). Is there an effective way to do this in Python?

EDIT: Some people have been asking OH MY GOD WHY???? Well, first of all, as an exercise. Second, you are failing to understand what I'm trying to do: I'm attempting to emulate a CLASS with a MAP. In a class in python you would say:

```
var otherfunc = self.predefinedFunction
```

I want to be able to use self (this in some other languages) to reference the object. So in python I want to turn this:

```python
my_obj = {
    'sqr': lambda x: x*x,
    'quad': my_obj['sqr']
}
```

into this:

```python
my_obj = {
    'sqr': lambda x: x*x,
    'quad': this['sqr']
}
```
If you're trying to implement a closure with a map, this would work fine:

```python
mymap = {'a': 1, 'b': "foo"}  # all of your previously initialized and constant data
mymap["self"] = mymap
```

Then you can call:

```python
mymap["self"]["b"]
```

Here's how you can write code using this:

```python
mymap["func"] = lambda x: x * mymap["self"]["x"]
```

This is admittedly ugly, but you have no way to refer to the map as anything but a global variable within a lambda expression. In particular, there's no good way to self-reference. A better approach is to use an object, not a map.
Writing a p2p client/server app

Possible duplicate: How to write a twisted server that is also a client?

How can I create a TCP client/server app with twisted, where the server can also send requests, not just answer them? Something like a p2p app, but where clients always initiate the connection. Since I don't know when the requests from the server will occur, I don't see how I can do this once the reactor is started.
The question you have to ask yourself is: why is the server sending a request?

Presumably something has happened in the world that would prompt the server to send a request; it wouldn't just do it at random. Even if it did do it at random, the thing that has happened in the world would be "some random amount of time has passed". In other words, callLater(random(...), doSomething).

When you are writing a program with Twisted, you start off by setting up ways to react to events. Then you run the reactor - i.e. the "thing that reacts to events" - forever. At any time you can set up new ways to react to incoming network events (reactor.connectTCP, reactor.listenTCP, reactor.callLater) or tear down existing waiting things (protocol.loseConnection, port.stopListening, delayedCall.cancel). You don't need to re-start the reactor; in fact, really, the only thing you should do before the reactor runs is reactor.callWhenRunning(someFunctionThatListensOrConnects), and write someFunctionThatListensOrConnects to do all your initial set-up. That set-up then happens once the reactor is already running, which demonstrates that you don't need to do anything in advance; the reactor is perfectly capable of changing its configuration as it runs.

If the event that causes the server to send an event to client B is the fact that client A sent it a message, then your question is answered by the FAQ, "how do I make input on one connection result in output on another?"
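A minimal sketch of that callLater idea: a server that pushes a message to a connected client after a random delay, without ever restarting the reactor. The Pusher name and port are illustrative, not from the question.

```python
import random

from twisted.internet import reactor, protocol


class Pusher(protocol.Protocol):
    def connectionMade(self):
        # schedule a server-initiated message at a random future time;
        # any later event handler could schedule more of these
        reactor.callLater(random.uniform(1, 5), self.push)

    def push(self):
        self.transport.write(b"server-initiated request\r\n")


factory = protocol.Factory()
factory.protocol = Pusher

reactor.listenTCP(8000, factory)
reactor.run()
```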
MongoAlchemy embedded documents

Does anyone know how to create a model with an embedded document in MongoAlchemy? I've searched the documentation, but there isn't any example of doing that.
Have a look at: https://github.com/jeffjenkins/MongoAlchemy/blob/master/examples/examples.py

There's a sample there, but for completeness: yes, MongoAlchemy can use embedded documents, like this:

```python
class Address(Document):
    street_address = StringField()
    city = StringField()
    state_province = StringField()
    country = StringField()


class User(Document):
    name = StringField()
    email = StringField()
    address = DocumentField(Address)


user = User()
user.name = "tony"
user.address = Address()
user.address.city = "London"
```
Issue with the Python any() function

I am trying to convert a list of lists into a single list with '1's if any of the elements in an inner list is 1 and 0 otherwise. I have the following list:

```python
result = [[-1, -1, 0], [1, 0, -1], [1, 1, 0]]
```

If I use any() on the first list, i.e. result[0], I get True even though it is supposed to be False:

```python
any(result[0])  # gives True
```

However, when I test each element separately, I get False in each case, i.e. result[0][0] == True gives False, and similar results for the remaining elements. I can't seem to understand what I'm doing wrong.
any tests whether any value is truthy, not whether any value equals True. All non-zero integers are truthy, so any([-1, -1, 0]) is True. See Truth Value Testing for the details, which apply to any and all alike.
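To get the list you described (1 if any element of an inner list equals 1, else 0), compare against 1 explicitly; a minimal sketch:

```python
result = [[-1, -1, 0], [1, 0, -1], [1, 1, 0]]
flags = [int(any(v == 1 for v in row)) for row in result]
print(flags)  # [0, 1, 1]
```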
Text display with matplotlib

I'm having a problem with the plt.text() method in matplotlib and I am hoping someone can help me. Below is a basic linear regression example where I would like to display some text ("slope =") and the actual slope of the line on the graph:

```python
import csv
import scipy as sp
import scipy.stats
import matplotlib.pyplot as plt

x, y = sp.loadtxt('nums.csv', delimiter=',', usecols=(0, 1), unpack=True)
linear_reg = slope, intercept, r_value, p_value, std_err = sp.stats.linregress(x, y)

plt.title('SO Regression Example')
plt.text(2, 30, r'slope=', slope, fontsize=15)
plt.plot(x, y)
plt.show()
```

The above code throws an AttributeError: 'numpy.float64' object has no attribute 'items'. My code works fine if I remove either r'slope=' or slope from the plt.text call. For example, both of these lines work just fine:

```python
plt.text(2, 30, slope, fontsize=15)       # displays: 0.82785632403515463
plt.text(2, 30, r'slope =', fontsize=15)  # displays: slope
```

Does anyone know how I can make this plot display both items (slope = 0.82785632403515463)? Right now I am using a hack with two separate plt.text() lines and manual positioning:

```python
plt.text(2, 30, r'slope=', fontsize=15)
plt.text(7, 30, slope, fontsize=15)
```

There must be an easier way?
Build the label string first (use a name other than str, which would shadow the built-in):

```python
label = 'slope = ' + str(slope)
plt.text(2, 30, label, fontsize=15)
```

or just:

```python
plt.text(2, 30, 'slope = ' + str(slope), fontsize=15)
```
Help with passing variables with csrfContext

I have a login page, and in my view I pass it the csrfContext variable for the csrf_token tag. However, problems arise when I try to pass more than just that variable into the context. For example, if I use locals():

```python
return render_to_response('base_index.html', locals())
```

I get a csrf error. For some reason it only works if I explicitly pass csrfContext, and only csrfContext. However, I also need to pass other variables. How can I pass csrfContext and those variables together? Sorry if this is a convoluted question. My view code is:

```python
def index(request):
    current = Module.objects.all()
    error = ""
    try:
        error = request.GET["alert"]
        if error == "failure":
            error = "Woops! Something went wrong. Please try again."
        elif error == "invalid":
            error = "Invalid username/password."
        else:
            error = "Unknown Error. Please try again."
    except:
        pass
    csrfContext = RequestContext(request, error, current)
    return render_to_response('base_index.html', csrfContext)
```

As you can see, I've been experimenting with adding variables to the RequestContext, but I have no idea how to access them in the template.
I would not recommend using locals() in this way. In more complex views you may end up passing much more to the template rendering than is required. A better way to do this is to create the RequestContext, and either pass in the values you want to add, or add them after: https://docs.djangoproject.com/en/dev/ref/templates/api/#django.template.Context
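A minimal sketch of that approach for the view in the question, passing the extra values in a dict so the template can use {{ error }} and {{ current }} (variable names taken from the question; this assumes the old render_to_response/RequestContext API the question is using):

from django.template import RequestContext
from django.shortcuts import render_to_response

def index(request):
    current = Module.objects.all()
    error = ""  # ... build the error message as in the question ...
    context = RequestContext(request, {'error': error, 'current': current})
    return render_to_response('base_index.html', context)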
Python: Waiting for a file to reach a size limit in a CPU friendly manner I am monitoring a file in Python and triggering an action when it reaches a certain size. Right now I am sleeping and polling, but I'm sure there is a more elegant way to do this:

POLLING_PERIOD = 10
SIZE_LIMIT = 1 * 1024 * 1024

while True:
    sleep(POLLING_PERIOD)
    if stat(file).st_size >= SIZE_LIMIT:
        # do something

The thing is, if I have a big POLLING_PERIOD, my file limit is not accurate if the file grows quickly, but if I have a small POLLING_PERIOD, I am wasting CPU. How can I do this? Thanks!
Linux Solution

You want to look at using pyinotify; it is a Python binding for inotify. Here is an example of watching for close events; it isn't a big jump to listening for size changes.

#!/usr/bin/env python
import os, sys
from pyinotify import WatchManager, Notifier, ProcessEvent, EventsCodes

def Monitor(path):
    class PClose(ProcessEvent):
        def process_IN_CLOSE(self, event):
            f = event.name and os.path.join(event.path, event.name) or event.path
            print 'close event: ' + f

    wm = WatchManager()
    notifier = Notifier(wm, PClose())
    wm.add_watch(path, EventsCodes.IN_CLOSE_WRITE | EventsCodes.IN_CLOSE_NOWRITE)

    try:
        while 1:
            notifier.process_events()
            if notifier.check_events():
                notifier.read_events()
    except KeyboardInterrupt:
        notifier.stop()
        return

if __name__ == '__main__':
    try:
        path = sys.argv[1]
    except IndexError:
        print 'use: %s dir' % sys.argv[0]
    else:
        Monitor(path)

Windows Solution

pywin32 has bindings for file system notifications for the Windows file system. What you want to look for is using FindFirstChangeNotification, tying into that, and listening for FILE_NOTIFY_CHANGE_SIZE. This example listens for file name changes; it isn't a big leap to listen for size changes.

import os
import win32file
import win32event
import win32con

path_to_watch = os.path.abspath(".")

#
# FindFirstChangeNotification sets up a handle for watching
# file changes. The first parameter is the path to be
# watched; the second is a boolean indicating whether the
# directories underneath the one specified are to be watched;
# the third is a list of flags as to what kind of changes to
# watch for. We're just looking at file additions / deletions.
#
change_handle = win32file.FindFirstChangeNotification(
    path_to_watch,
    0,
    win32con.FILE_NOTIFY_CHANGE_FILE_NAME
)

#
# Loop forever, listing any file changes. The WaitFor... will
# time out every half a second allowing for keyboard interrupts
# to terminate the loop.
#
try:
    old_path_contents = dict([(f, None) for f in os.listdir(path_to_watch)])
    while 1:
        result = win32event.WaitForSingleObject(change_handle, 500)

        #
        # If the WaitFor... returned because of a notification (as
        # opposed to timing out or some error) then look for the
        # changes in the directory contents.
        #
        if result == win32con.WAIT_OBJECT_0:
            new_path_contents = dict([(f, None) for f in os.listdir(path_to_watch)])
            added = [f for f in new_path_contents if not f in old_path_contents]
            deleted = [f for f in old_path_contents if not f in new_path_contents]
            if added:
                print "Added: ", ", ".join(added)
            if deleted:
                print "Deleted: ", ", ".join(deleted)
            old_path_contents = new_path_contents
            win32file.FindNextChangeNotification(change_handle)
finally:
    win32file.FindCloseChangeNotification(change_handle)

OSX Solution

There are equivalent hooks into the OSX file system using PyKQueue as well, but if you can understand these examples you can Google for the OSX solution as well.

Here is a good article about Cross Platform File System Monitoring.
Duplicate every value in array as a new array I have a numpy ndarray input like this:

[[T, T, T, F],
 [F, F, T, F]]

and I want to duplicate every value as a new array, so the output would be:

[[[T,T], [T,T], [T,T], [F,F]],
 [[F,F], [F,F], [T,T], [F,F]]]

How can I do this? Thank you in advance
One way would be using np.dstack to replicate the array along the third axis:

np.dstack([a, a])

array([[['T', 'T'],
        ['T', 'T'],
        ['T', 'T'],
        ['F', 'F']],

       [['F', 'F'],
        ['F', 'F'],
        ['T', 'T'],
        ['F', 'F']]], dtype='<U1')

Setup:

T = 'T'
F = 'F'
a = np.array([[T, T, T, F],
              [F, F, T, F]])
How do I call a python script on the command line like I would call a common shell command, such as cp? Suppose I've got a script I can run with:

python hello_world.py
>>> "Hello, world!"

How can I configure hello_world.py to be executable without 'python' or './':

hello_world
>>> "Hello, world!"

EDIT: Thanks all for the suggestions! The shebang-and-path solution and the python package solution both worked. I checked the python package solution because I liked the added features of tacking the script onto my path via pip install -e . without hand-editing my path variable, and catching the script during pip freeze > requirements.txt calls.
If you're on Linux or Unix, add a shebang at the top of your file; it's typically something like

#!/usr/bin/env python

or a direct interpreter path such as

#!/usr/bin/python

You'll need execution perms to run the file in that manner as well. Use chmod:

chmod +x hello_world.py

If you're not running as root, you may require sudo, and that would be

sudo chmod +x hello_world.py

And then attempt to run with

./hello_world.py

If you must dispense with the ./, either put the script's directory on your PATH (renaming the file to drop the .py if you like), or define an alias:

alias hello_world='./hello_world.py'
Tkinter, canvas, create_text and zooming Is it normal that Tkinter's Canvas' create_text 's font size doesn't change when I change the Canvas' scale with canvas.scale ?I thought that, as it is a high level GUI management system, I wouldn't have to resize manually the text done with create_text after a zooming. Isn't this strange, or am I wrong ?
It's normal, even if not entirely what you want. The scale method just changes the coordinate lists, but text items only have one of those so they just get (optionally) translated.This also applies to image and bitmap items. And features of other items like the line width; they're not scaled.
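So if you want text to grow with the zoom, you have to adjust the font size yourself alongside the scale call. A minimal sketch (the canvas contents and scale factor are just for illustration):

import tkinter as tk
import tkinter.font as tkfont

root = tk.Tk()
canvas = tk.Canvas(root, width=300, height=200)
canvas.pack()

font = tkfont.Font(family="Helvetica", size=12)
canvas.create_text(100, 100, text="zoom me", font=font)

def zoom(factor):
    canvas.scale("all", 0, 0, factor, factor)  # rescales coordinates only
    size = int(font.cget("size"))
    font.configure(size=max(1, int(size * factor)))  # rescale the glyphs by hand

zoom(2.0)
root.mainloop()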
How can the x-axis dates be formatted without hh:mm:ss using matplotlib DateFormatter? I am pulling in data on Japanese GDP and graphing a stacked barchart overlayed with a line. I would like for the x-axis to have only yyyy-mm and no timestamp. I read about a compatibility issue with pandas and matplotlib epochs. Is that the issue here? When I try to use matplotlib DateFormatter, the returned dates begin with 1970. How can I fix this?

import pandas as pd
import pandas_datareader.data as web
import datetime
import requests
import investpy
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

start1 = '01/01/2013'  # dd/mm/yyyy
end1 = '22/04/2022'

# Real GDP growth
# Source: Cabinet Office http://www.esri.cao.go.jp/en/sna/sokuhou/sokuhou_top.html
# Get the data
url = 'https://www.esri.cao.go.jp/jp/sna/data/data_list/sokuhou/files/2021/qe214_2/tables/nritu-jk2142.csv'
url2 = url.replace('nritu', 'nkiyo')  # URL used for GDP growth by component
url3 = url.replace('nritu-j', 'gaku-m')
url4 = url.replace('nritu', 'gaku')
url5 = url.replace('nritu', 'kgaku')
df = pd.read_csv(url2, header=5, encoding='iso-8859-1').loc[49:]

gdpkeep = {
    'Unnamed: 0': 'date',
    'GDP(Expenditure Approach)': 'GDP',
    'PrivateConsumption': 'Consumption',
    'PrivateResidentialInvestment': 'inv1',
    'Private Non-Resi.Investment': 'inv2',
    'Changein PrivateInventories': 'inv3',
    'GovernmentConsumption': 'gov1',
    'PublicInvestment': 'gov2',
    'Changein PublicInventories': 'gov3',
    'Goods & Services': 'Net Exports'
}
df = df[list(gdpkeep.keys())].dropna()
df.columns = df.columns.to_series().map(gdpkeep)

# Adjust the date column to make each value a consistent format
dts = df['date'].str.split('-').str[0].str.split('/ ')
for dt in dts:
    if len(dt) == 1:
        dt.append(dt[0])
        dt[0] = None
df['year'] = dts.str[0].fillna(method='ffill')
df['month'] = dts.str[1].str.zfill(2)
df['date2'] = df['year'].str.cat(df['month'], sep='-')
df['date'] = pd.to_datetime(df['date2'], format='%Y-%m')

# Sum up various types of investment and government spending
df['Investment'] = df['inv1'] + df['inv2'] + df['inv3']
df['Government Spending'] = df['gov1'] + df['gov2'] + df['gov3']
df = df.set_index('date')[['GDP', 'Consumption', 'Investment', 'Government Spending', 'Net Exports']]
df.to_csv('G:\\AutomaticDailyBackup\\Python\\MacroEconomics\\Japan\\Data\\gdp.csv', header=True)  # csv file created
print(df.tail(8))

# Plot
df['Net Exports'] = df['Net Exports'].astype(float)
ax = df[['Consumption', 'Investment', 'Government Spending', 'Net Exports']]['2013':].plot(label=df.columns, kind='bar', stacked=True, figsize=(10, 10))
ax.plot(range(len(df['2013':])), df['GDP']['2013':], label='Real GDP', marker='o', linestyle='None', color='black')
plt.title('Japan: Real GDP Growth')
plt.legend(frameon=False, loc='upper left')
ax.set_frame_on(False)
ax.set_ylabel('Annual Percent Change')
# dfmt = mdates.DateFormatter("%Y-%m")  # proper formatting Year-month
# ax.xaxis.set_major_formatter(dfmt)
plt.savefig('G:\\AutomaticDailyBackup\\Python\\MacroEconomics\\Japan\\Data\\RealGDP.png')
plt.show()
Don't use DateFormatter, as it is causing trouble; rather, change the format of the dataframe index using

df.index = pd.to_datetime(df.index, format='%m/%d/%Y').strftime('%Y-%m')

Here is what I did with your gdp.csv file:

import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import matplotlib.dates as mdates
from matplotlib.dates import DateFormatter
import matplotlib.dates

df = pd.read_csv(r"D:\python\gdp.csv").set_index('date')
df.index = pd.to_datetime(df.index, format='%m/%d/%Y').strftime('%Y-%m')

# Plot
fig, ax = plt.subplots()
df['Net Exports'] = df['Net Exports'].astype(float)
ax = df[['Consumption', 'Investment', 'Government Spending', 'Net Exports']]['2013':].plot(label=df.columns, kind='bar', stacked=True, figsize=(10, 10))
ax.plot(range(len(df['2013':])), df['GDP']['2013':], label='Real GDP', marker='o', linestyle='None', color='black')
plt.legend(frameon=False, loc='upper left')
ax.set_frame_on(False)
plt.savefig(r'D:\python\RealGDP.png')
plt.show()
How to not overwrite a value of a dictionary when taking and storing a user's input? (Python) I'm a python beginner who tried making a contact book/address book program. I take a user input/contact info (value) and store it as a value in a dictionary; however, while the program is still running, if I try to enter a new contact info (new value) it will overwrite the existing one. How do I solve this, so I can add as many contact infos (values) as I want while the program is still running? Thanks in advance.

This is my code:

def head():
    print("")
    print("========================")
    print("      Contact Book      ")
    print("========================")

def restart():
    response = input("\nOpen menu again? (yes/no): ").lower()
    if response == "yes":
        task()
    else:
        print("\nSee You next time!")

def task():
    head()
    done = False
    print('''1. Add Contact
2. Search
3. View Contact List
4. Delete All Contact
5. Exit''')
    while not done:
        task = input("\nWhat do You want to do? (1-5):")
        if task == "1":
            print("\nAdding a new contact!")
            global cnt_lst
            global new
            cnt_lst = {}
            new = {}
            new['new_key'] = {}
            new['new_key']['Name '] = input("Name: ")
            new['new_key']['Phone'] = input("Phone: ")
            if not new['new_key']['Phone'].isnumeric():
                while not new['new_key']['Phone'].isnumeric():
                    print("Invalid input, please enter only a number!")
                    new['new_key']['Phone'] = input("Phone: ")
            new['new_key']['Email'] = input("Email: ")
            cnt_lst.update(new)
            print("\nContact is saved!")
            done = True
            restart()
        elif task == "2":
            search = input("\nSearch: ")
            info = False
            for key, value in cnt_lst.items():
                for npe, val in value.items():
                    if search in val:
                        info = True
                        print("=" * 20)
                        for npe, val in value.items():
                            print(npe, ' : ', val)
                        break
            if not info:
                print("\nNo info was found!")
            done = True
            restart()
        elif task == "3":
            if 'cnt_lst' not in globals():
                print("\nNo contact info available!")
            elif 'cnt_lst' in globals() and len(cnt_lst) == 0:
                print("\nNo contact info available!")
            else:
                print("\nAll Contact Info")
                for key, value in cnt_lst.items():
                    for npe, val in value.items():
                        print("=" * 20)
                        for npe, val in value.items():
                            print(npe, ' : ', val)
                        break
            done = True
            restart()
        elif task == "4":
            cnt_lst.clear()
            print("\nSuccesfully deleted all contact info!")
            done = True
            restart()
        elif task == "5":
            print("See You next time!")
            break
        else:
            print("Invalid input please enter a single number from 1 to 5")
            restart()

task()
I think your problem is that every time your user presses 1 to input a new contact, your code goes like this:

print("\nAdding a new contact!")
global cnt_lst
global new
cnt_lst = {}  # <- here you are erasing your dictionary
new = {}

Initialize cnt_lst once at program start instead of inside the handler. Better yet, you should store your dictionary, for example as a text file or XML or JSON, so all the contacts are stored locally; then at the start of your program you can read its contents for printing or adding new entries.
Parallel Document Conversion ODT > PDF Libreoffice I am converting hundreds of ODT files to PDF files, and it takes a long time doing one after the other. I have a CPU with multiple cores. Is it possible to use bash or python to write a script to do these in parallel? Is there a way to parallelize (not sure if I'm using the right word) batch document conversion using libreoffice from the command line? I have been doing it in python/bash calling the following commands:

libreoffice --headless --convert-to pdf *appsmergeme.odt

OR

subprocess.call(str('cd $HOME; libreoffice --headless --convert-to pdf *appsmergeme.odt'), shell=True)

Thank you! Tim
You can run libreoffice as a daemon/service. Please check the following link, maybe it helps you too: Daemonize the LibreOffice service

Another possibility is to use unoconv. "unoconv is a command line utility that can convert any file format that OpenOffice can import, to any file format that OpenOffice is capable of exporting."
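If you want to stay in Python, one possible sketch is a process pool driving unoconv (paths are placeholders, and this assumes unoconv is installed and on PATH; note that concurrent LibreOffice instances can fight over a single user profile, so you may need unoconv's listener mode or separate -env:UserInstallation profiles):

from multiprocessing import Pool
import subprocess
import glob

def convert(odt_path):
    # unoconv writes file.pdf next to file.odt by default
    subprocess.call(['unoconv', '-f', 'pdf', odt_path])

if __name__ == '__main__':
    files = glob.glob('/path/to/docs/*.odt')
    with Pool(4) as pool:  # one worker per core you want to use
        pool.map(convert, files)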
PyDev can't recognize all module members correctly I have two examples:As you can see PyDev marks Process in first example and PULL in second as "Undefined variable from import (...)". However, code is executed without any problems. It's just PyDev can't resolve those names.Taking a closer look on multiprocessing and zmq modules I found that members that can't be recognized are imported in some weird way by updating globals.Is there a way to make PyDev evaluate those import files more thoroughly?
Yes, you can ask PyDev to analyze modules through a shell.See: http://pydev.org/manual_101_interpreter.html for more details (mostly the forced builtins part).
Jinja2 does not render blocks I am going through a flask tutorial and want to create a blog using flask. For this purpose I took some code from the tutorial and wrote some code myself. The problem is that the Jinja2 templating engine only seems to render some of the blocks I declared in the templates, and I don't know why. This is what I got so far:

base.html:

<html>
<head>
    {% if title %}
    <title>{{ title }} - microblog</title>
    {% else %}
    <title>Welcome to microblog</title>
    {% endif %}
</head>
<body>
    {% with messages = get_flashed_messages() %}
    {% if messages %}
    <ul>
        {% for message in messages %}
        <li>{{ message }}</li>
        {% endfor %}
    </ul>
    {% endif %}
    {% endwith %}
    <p>---</p>
    {% block header %}{% endblock %}
    <p>---</p>
    {% block navigation %}{% endblock %}
    <!-- other templates can insert themselves as block -->
    <!-- the following block is called 'content' -->
    {% block content %}{% endblock %}
</body>

Now there are the blocks, which extend the base.html:

index.html:

{% extends "base.html" %}
{% block content %}
<h1>Hi, {{ users.nickname }}!</h1>
{% for post in posts %}
<div><p>{{ post.author.nickname }} says: <b>{{ post.body }}</b></p></div>
{% endfor %}
{% endblock %}

header.html:

{% extends "base.html" %}
{% block header %}
<!-- this is the header block -->
<h1>Microblog!</h1>
{% endblock %}

navigation.html (copied from another css dropdown menu tutorial):

{% extends "base.html" %}
{% block navigation %}
<nav id="nav">
<ul id="navigation">
    <li><a href="#" class="first">Home</a></li>
    <li><a href="#">Services »</a>
        <ul>
            <li><a href="#">Web Development</a></li>
            <li><a href="#">Logo Design</a></li>
            <li><a href="#">Identity & Branding »</a>
                <ul>
                    <li><a href="#">Business Cards</a></li>
                    <li><a href="#">Brochures</a></li>
                    <li><a href="#">Envelopes</a></li>
                    <li><a href="#">Flyers</a></li>
                </ul>
            </li>
            <li><a href="#">Wordpress</a></li>
        </ul>
    </li>
    <li><a href="#">Portfolio »</a>
        <ul>
            <li><a href="#">Graphic Design</a></li>
            <li><a href="#">Photography</a></li>
            <li><a href="#">Architecture</a></li>
            <li><a href="#">Calligraphy</a></li>
            <li><a href="#">Film »</a>
                <ul>
                    <li><a href="#">John Carter</a></li>
                    <li><a href="#">The Avengers</a></li>
                    <li><a href="#">The Amazing SpiderMan</a></li>
                    <li><a href="#">Madagascar 3</a></li>
                </ul>
            </li>
            <li><a href="#">Graffity </a></li>
        </ul>
    </li>
    <li><a href="#">Testimonials</a></li>
    <li><a href="#">Blog</a></li>
    <li><a href="#" class="last">Contact</a></li>
</ul>
</nav>
{% endblock %}

However, the resulting source code in the browser is:

<html>
<head>
    <title>Home - microblog</title>
</head>
<body>
    <p>---</p>
    <p>---</p>
    <!-- other templates can insert themselves as block -->
    <!-- the following block is called 'content' -->
<h1>Hi, Miguel!</h1>
<div><p>John says: <b>Beautiful day in Portland!</b></p></div>
<div><p>Susan says: <b>The Avengers movie was so cool!</b></p></div>
<div><p>Xiaolong says: <b>Crouching Tiger Hidden Dragon, one of my favorites …</b></p></div>
</body>
</html>

My views.py is this:

from flask import render_template, flash, redirect
from app import app
from .forms import LoginForm

@app.route('/')
@app.route('/index')
def index():
    users = {'nickname': 'Miguel'}  # fake user
    posts = [  # fake array of posts
        {
            'author': {'nickname': 'John'},
            'body': 'Beautiful day in Portland!'
        },
        {
            'author': {'nickname': 'Susan'},
            'body': 'The Avengers movie was so cool!'
        },
        {
            'author': {'nickname': 'Xiaolong'},
            'body': 'Crouching Tiger Hidden Dragon, one of my favorites …'
        }
    ]
    # Flask uses the jinja templating engine internally, which fills in the variables into the template.
    # 1.arg: name of the template file in the templates folder
    # 2. - x. arg: values for the variables we want to see in the rendered page
    return render_template(
        'index.html',
        title='Home',
        users=users,
        posts=posts)

What am I doing wrong? Why is only the block content from index.html rendered?

EDIT#1: Clarification: The expected result was Jinja rendering all mentioned blocks in the template and the base template, and all others which are mentioned in the ones Jinja sees on its way through the templates/blocks.

EDIT#2: Putting blocks into index.html: Even if I put the blocks into the index.html it does not render them:

{% extends "base.html" %}
{% block content %}
<p>---</p>
{% block header %}{% endblock %}
<p>---</p>
{% block navigation %}{% endblock %}
<h1>Hi, {{ users.nickname }}!</h1>
{% for post in posts %}
<div><p>{{ post.author.nickname }} says: <b>{{ post.body }}</b></p></div>
{% endfor %}
{% endblock %}
You are implementing each block in a different html file, but you render index.html. What Jinja2 does when you tell it to render index.html is grab the base template (base.html) and look at what modifications index.html brings - in your case, updating the content block. Jinja2 won't even look at the other block implementations (in other words the other html files) in this case. What you want is to implement title/navigation/etc. in base.html itself. The templating engine only looks at the inheritance chain of the template you are currently rendering; it doesn't load all existing templates for each render operation.

EDIT: Discussed in comments but updating here in case someone else runs into this: if you want to render different pieces from different files, use the {% include "template.html" %} directive, not blocks.
How can I update a plot with periodic information coming through a serial port (every ~100-200 msec)? Do I need an RTOS? I'm sending positioning information (x,y,z coordinates in double data type) from a microcontroller via a Serial Communication Interface (SCI). I would like to use a program to receive this information and update this coordinate every time I receive a new coordinate, preferably in Python. What is a good library to do this? Do I need a Real Time Operating System (and why?)? Should I use a different communication interface (and why?)?
No, an RTOS is not necessary. Any modern system should very easily be able to handle reading serial data every 100ms.Just get started reading from the serial port, and processing your data.pySerial
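A minimal pySerial sketch of that loop (the port name, baud rate, and the comma-separated "x,y,z" line format are assumptions for illustration):

import serial

ser = serial.Serial('/dev/ttyUSB0', 115200, timeout=1)

while True:
    line = ser.readline().decode('ascii', errors='ignore').strip()
    if not line:
        continue  # read timed out, no new coordinate yet
    try:
        x, y, z = map(float, line.split(','))
    except ValueError:
        continue  # skip malformed or partial lines
    print(f'position: x={x}, y={y}, z={z}')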
Unable to replicate gdal output I have a set of GRIB files that are flipped (longitude spans from 0 to 365), and I am using gdal to first transform the data to GeoTIFF, and then warp the gridded data to a standard WGS84 longitude (-180 to 180). So far, I have been using a combination of gdal_translate and gdalwarp from the command line, and using parallel to go through all the files fast. These are the functions in my bash script:

gdal_multiband_transform(){
    FILEPATH=$1
    SAVEPATH=$2
    NUM_BANDS=$(gdalinfo $FILEPATH | grep 'Band' | wc -l)
    if [[ $NUM_BANDS -eq 1 ]]
    then
        echo "Extracting 1 band from $FILEPATH"
        gdal_translate -of GTiff -b 1 $FILEPATH $SAVEPATH
    else
        echo "Extracting 2 bands from $FILEPATH"
        gdal_translate -of GTiff -b 1 -b 2 $FILEPATH $SAVEPATH
    fi
}

warp_raster(){
    echo "Rewarp all rasters in $PATH_TO_GRIB"
    find $PATH_TO_GRIB -type f -name '*.tif' | parallel -j 5 -- gdalwarp -t_srs WGS84 {} {.}_warp.tif \
        -wo SOURCE_EXTRA=1000 --config CENTER_LONG 0 -overwrite
}

warp_raster

Now, I wanted to replicate this same behavior in Python using the osgeo library. I skipped the translation part since I realize osgeo.gdal can warp the GRIB file directly instead of having to convert/translate to a GeoTIFF format. For that I used the following Python code:

from osgeo import gdal

OPTS = gdal.WarpOptions(dstSRS='WGS84',
                        warpOptions=['SOURCE_EXTRA=1000'],
                        options=['CENTER_LONG 0'])

try:
    ds = gdal.Open(filename)
except RuntimeError:
    ds = gdal.Open(str(filename))

if os.path.getsize(filename):
    ds_transform = gdal.Warp(file_temp_path, ds, options=OPTS)
    # is this a hack?
    ds_transform = None
else:
    print(f'{filename} is an empty file. No GDAL transform')

Here I define the same options from my bash script using gdal.WarpOptions. The outcome is visually the same; the code achieves the main goal: warp the longitude between -180 and 180. But when I take local statistics the differences are wild. Just a mean of the whole gridded data has a difference of 4 celsius (it is surface temperature data). Is there any GDAL option I am missing in osgeo that is generating these differences? I do not want to use a bash script since I am looking for a Python-only implementation.
First option, get GDAL 3.4 where this problem is solved, GRIBs get automagically transformed from 0-360 to -180-180 when being converted from GRIB to GeoTIFFSecond option, use geosub available from NPM (npm -g install geosub) to download NOAA's GRIBs if this is what you are using, it can do this for youThird option, use gdalwarp --config CENTER_LONG 0 which has been there since the early days(Disclaimer: I am the author of the GRIB 0-360 translation in GDAL and the geosub package)
An optimized matrix multiplication library in Python (similar to Matlab) but is NOT numpy According to the NumPy documentation they may deprecate their np.matrix class. And while arrays do have their multitude of use cases, they cannot do everything. Specifically, they will "break" when doing pretty basic linear algebra operations (you can read more about it here). Building my own matrix multiplication module in python is not too difficult, but it would not be optimized at all. I am looking for another library that has full linear algebra support which is optimized upon BLAS (Basic Linear Algebra Subprograms). Or at the least, are there any documents on how to DIY integrate a BLAS to python.

Edit: So some are suggesting the @ operator, which is like pushing a mole down a hole and having him pop up immediately in the neighbouring one. In essence, what is happening is a debugger's nightmare:

W*x == W*x.T
W@x == W@x.T

One would hope that an error is raised here letting you know that you made a mistake in defining your matrices. But since arrays don't store 2D information if they are along one axis, I am not sure that the issue can ever be solved via np.array. (These problems don't exist with np.matrix but for some reason the developers seem insistent on removing it.)
Actually, numpy offers BLAS-powered matrix multiplication through the matmul operator @. This invokes the __matmul__ magic method for a given class. All you have to do in the above example is W @ x. Other linear algebra stuff can be found in the np.linalg module.

Edit: I guess your problem is way more about the language's style than any technical issues. I found this answer very elucidative: Transposing a NumPy array

Also, I find it very improbable that you will find something that is NOT numpy since most of the major machine learning/data science frameworks rely on it.
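And if the transpose ambiguity is the worry, keeping vectors as 2-D arrays restores the error you want. A quick illustration (shapes chosen arbitrarily):

import numpy as np

W = np.random.rand(3, 4)
x = np.random.rand(4, 1)  # a column vector kept as a 2-D array

y = W @ x    # fine: (3, 4) @ (4, 1) -> (3, 1)
y = W @ x.T  # raises ValueError: (3, 4) @ (1, 4) shapes don't align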
How to transform output of neural network and still train? I have a neural network which outputs output. I want to transform output before the loss and backpropagation happen. Here is my general code:

with torch.set_grad_enabled(training):
    outputs = net(x_batch[:, 0], x_batch[:, 1])  # the prediction of the NN
    # My issue is here:
    outputs = transform_torch(outputs)
    loss = my_loss(outputs, y_batch)

    if training:
        scheduler.step()
        loss.backward()
        optimizer.step()

I have a transformation function which I put my output through:

def transform_torch(predictions):
    torch_dimensions = predictions.size()
    torch_grad = predictions.grad_fn
    cuda0 = torch.device('cuda:0')
    new_tensor = torch.ones(torch_dimensions, dtype=torch.float64, device=cuda0, requires_grad=True)
    for i in range(int(len(predictions))):
        a = predictions[i]
        # with torch.no_grad():  # Note: no training happens if this line is kept in
        new_tensor[i] = torch.flip(torch.cumsum(torch.flip(a, dims=[0]), dim=0), dims=[0])
    return new_tensor

My problem is that I get an error on the next to last line:

RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.

Any suggestions? I have already tried using "with torch.no_grad():" (commented), but this results in very poor training and I believe that the gradients don't backpropagate properly after the transformation function. Thanks!
The error is quite correct about what the issue is - when you create a new tensor with requires_grad = True, you create a leaf node in the graph (just like parameters of a model) and you are not allowed to do an in-place operation on it.

The solution is simple: you do not need to create the new_tensor in advance. It is not supposed to be a leaf node; just create it on the fly.

new_tensor = []
for i in range(int(len(predictions))):
    a = predictions[i]
    new_tensor.append(torch.flip(torch.cumsum(torch.flip(a, ...), ...), ...))

new_tensor = torch.stack(new_tensor, 0)

This new_tensor will inherit all properties like dtype and device from predictions and will have require_grad = True already.
Return specific functions of a class from another function I have 2 classes, A and B:

class A:
    def name(self):
        return B(name=self)

class B:
    def __init__(self, name):
        self.name = name

    def hi(self):
        return "Hi!" + self.name

    def bye(self):
        return "Bye!" + self.name

print(A.name('Robert').hi())
print(A.name('Robert').bye())  # I don't want this :(

This prints out Hi! Robert and Bye! Robert; however, class A has access to bye() (which I do not want). Is there a way to limit the functions of B which A.name can access?
That would contradict Python's typing system.In statically typed languages, not only does a data object in the memory has a type, but also the variable which references it, and the two types do not have to be identical (though they have to match). The class members accessible through the referencing variable depend usually on the variable's type, not the object's type.Python is a dynamically typed language. Referencing variables do not have types, they are just labels pointing at the data objects. And since, in your case, A's name method returns an object of type B, this object obviously has access to all of B's class members. It does not know that it was originally created by A, it's just an ordinary B instance.
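What you can do instead is return a small wrapper object that only forwards the methods you want to expose. A sketch (the Greeter name is invented for illustration):

class Greeter:
    """Wraps a B instance but exposes only hi()."""
    def __init__(self, b):
        self._b = b

    def hi(self):
        return self._b.hi()

class A:
    def name(self):
        return Greeter(B(name=self))

# A.name('Robert').hi()   -> works
# A.name('Robert').bye()  -> AttributeError: 'Greeter' object has no attribute 'bye'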
Scrapy crawler: Unable to store multiple urls into postgres I have created a crawler using scrapy python. I want to store multiple urls fetched by the crawler into a postgres table. When I start the crawler the urls are fetched and the table gets created in postgres, but the data is not getting stored.

Technology used: Scrapy, Python
Output to be: The urls should get stored inside the postgres table.
Error: I am unable to store all the urls. The crawler is not working for all the websites.

Please Help!!!

import scrapy
import os
import psycopg2

conn = psycopg2.connect(
    database="postgres", user='postgres', password='password',
    host='127.0.0.1', port='5432')
print("connected")
conn.autocommit = True
cur = conn.cursor()
cur.execute("""CREATE TABLE IF NOT EXISTS tmp_crawler(WEBSITE VARCHAR(500) NOT NULL)""")

class MySpider(scrapy.Spider):
    name = 'feed_exporter_test'
    allowed_domains = ['google.com']
    start_urls = ['https://www.google.com//']

    def parse(self, response):
        urls = response.xpath("//a/@href").extract()
        for url in urls:
            abs_url = response.urljoin(url)
            var1 = "INSERT INTO tmp_crawler(website) VALUES('" + url + "')"
            cur.execute(var1)
            conn.commit()
            yield {'title': abs_url}
You can use scrapy ITEM_PIPELINES to achieve this. See the sample implementation below:

import scrapy
import psycopg2

class DBPipeline(object):
    def open_spider(self, spider):
        # connect to database
        try:
            self.conn = psycopg2.connect(database="postgres",
                                         user="postgres",
                                         password="password",
                                         host="127.0.0.1",
                                         port="5432")
            self.conn.autocommit = True
            self.cur = self.conn.cursor()
        except:
            spider.logger.error("Unable to connect to database")

        # create the table
        try:
            self.cur.execute("CREATE TABLE IF NOT EXISTS tmp_crawler (website VARCHAR(500) NOT NULL);")
        except:
            spider.logger.error("Error creating table `tmp_crawler`")

    def process_item(self, item, spider):
        try:
            self.cur.execute('INSERT INTO tmp_crawler (website) VALUES (%s)', (item.get('title'),))
            spider.logger.info("Item inserted to database")
        except Exception as e:
            spider.logger.error(f"Error `{e}` while inserting item <{item.get('title')}>")
        return item

    def close_spider(self, spider):
        self.cur.close()
        self.conn.close()

class MySpider(scrapy.Spider):
    name = 'feed_exporter_test'
    allowed_domains = ['google.com']
    start_urls = ['https://www.google.com/']
    custom_settings = {
        'ITEM_PIPELINES': {
            DBPipeline: 500
        }
    }

    def parse(self, response):
        urls = response.xpath("//a/@href").extract()
        for url in urls:
            yield {'title': response.urljoin(url)}
How to get the first item in a group by that meets a certain condition in pandas? I have the following code:

grouped_stats = stats.groupby(stats.last_mv.ne(stats.last_mv.shift()).cumsum())

last_mv is a decimal value. In the code above I am grouping by consecutive values. I am trying two ways to obtain the first value that is 0.25% above the first item in the group's last_mv value. In other words, I have grouped by consecutive last_mv values; I want to take the first of each group, multiply it by 1.025, and then find the first value within the group that matches this value (if one exists).

I tried:

grouped_stats.filter(lambda x: x.last_mv >= (x.first().last_mv * 1.025))

but I can't access the first row in the group with .first() as I assumed I would.

I also tried:

grouped_stats.loc[grouped_stats.last_mv >= (grouped_stats.first().last_mv * 1.025)]

but I get the error: "Cannot access callable attribute 'loc' of 'DataFrameGroupBy' objects, try using the 'apply' method"
I believe you need transform for Series with same size like original DataFrame filled by first values per groups:stats[ stats.last_mv >= (grouped_stats.last_mv.transform('first') * 1.025) ]
How can I replace a substring in a Python pathlib.Path? Is there an easy way to replace a substring within a pathlib.Path object in Python? The pathlib module is nicer in many ways than storing a path as a str and using os.path, glob.glob etc, which are built in to pathlib. But I often use files that follow a pattern, and often replace substrings in a path to access other files:

data/demo_img.png
data/demo_img_processed.png
data/demo_spreadsheet.csv

Previously I could do:

img_file_path = "data/demo_img.png"
proc_img_file_path = img_file_path.replace("_img.png", "_img_proc.png")
data_file_path = img_file_path.replace("_img.png", "_spreadsheet.csv")

pathlib can replace the file extension with the with_suffix() method, but it only accepts extensions as valid suffixes. The workarounds are:

import pathlib
import os

img_file_path = pathlib.Path("data/demo_img.png")
proc_img_file_path = pathlib.Path(str(img_file_path).replace("_img.png", "_img_proc.png"))
# os.fspath() is available in Python 3.6+ and is apparently safer than str()
data_file_path = pathlib.Path(os.fspath(img_file_path).replace("_img.png", "_img_proc.png"))

Converting to a string to do the replacement and reconverting to a Path object seems laborious. Assume that I never have a copy of the string form of img_file_path, and have to convert the type as needed.
You are correct. To replace old with new in Path p, you need:

p = Path(str(p).replace(old, new))

EDIT

We turn Path p into str so we get this str method:

Help on method_descriptor:

replace(self, old, new, count=-1, /)
    Return a copy with all occurrences of substring old replaced by new.

Otherwise we'd get this Path method:

Help on function replace in module pathlib:

replace(self, target)
    Rename this path to the given path, clobbering the existing destination if it exists, and return a new Path instance pointing to the given path.
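As a more pathlib-idiomatic alternative when the substring lives in the final component, you could rewrite just the name with with_name, reusing the paths from the question. A sketch:

import pathlib

img_file_path = pathlib.Path("data/demo_img.png")

proc_img_file_path = img_file_path.with_name(
    img_file_path.name.replace("_img.png", "_img_proc.png"))
data_file_path = img_file_path.with_name(
    img_file_path.name.replace("_img.png", "_spreadsheet.csv"))

print(proc_img_file_path)  # data/demo_img_proc.png
print(data_file_path)      # data/demo_spreadsheet.csv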
how to merge array of images which are generated by for loop I have a set of 5 images and I must resize them all to (16,16) dimension. Then, I have to print each image as a column vector. For this, I use a for loop to resize all the images but I can't merge them into an array. What should I do if I want to print 5 column matrices of the 5 images side by side as a (256*5) dimension matrix?

Next, I provide the code I have done so far:

import cv2
import numpy as np
import glob
import itertools
import xlsxwriter

folder = "E:/DOCUMENT(M.TECHS)/New folder/word/*.png"
files = list(glob.glob(folder))
i = 0
for i in files:
    abc = cv2.imread(i, 0)
    d = (16, 16)
    abc1 = cv2.resize(abc, d, interpolation=cv2.INTER_AREA)
    r, c = abc1.shape
    width, height = abc1.shape
    arr = np.ravel(abc1)
    print(arr)
Try appending all the raveled images to a list, then joining them as columns with np.stack(..., axis=1). (Each raveled image is 1-D, so concatenating along axis 1 would fail; stacking on a new axis gives the desired (256, 5) matrix.) For example, change as follows:

imgs = []
for i in files:
    abc = cv2.imread(i, 0)
    d = (16, 16)
    abc1 = cv2.resize(abc, d, interpolation=cv2.INTER_AREA)
    r, c = abc1.shape
    width, height = abc1.shape
    arr = np.ravel(abc1)
    imgs.append(arr)

final_img = np.stack(imgs, axis=1)  # shape (256, number_of_images)
Google speech recognition API credentials error I am trying to integrate the Google speech recognition API, but I keep getting an error saying ApplicationDefaultCredentialsError. I've been searching and I keep seeing something like: set GOOGLE_APPLICATION_CREDENTIALS=[PATH], but I don't know where actually to type that in the terminal and where to save the .json file. Is there a way I can correct this?

import argparse
import base64
import json

from googleapiclient import discovery
import httplib2
from oauth2client.client import GoogleCredentials

DISCOVERY_URL = ('https://{api}.googleapis.com/$discovery/rest?'
                 'version={apiVersion}')

def get_speech_service():
    credentials = GoogleCredentials.get_application_default().create_scoped(
        ['https://www.googleapis.com/auth/cloud-platform'])
    http = httplib2.Http()
    credentials.authorize(http)

    return discovery.build(
        'speech', 'v1beta1', http=http, discoveryServiceUrl=DISCOVERY_URL)

def main(speech_file):
    """Transcribe the given audio file.

    Args:
        speech_file: the name of the audio file.
    """
    with open(speech_file, 'rb') as speech:
        speech_content = base64.b64encode(speech.read())

    service = get_speech_service()
    service_request = service.speech().syncrecognize(
        body={
            'config': {
                'encoding': 'LINEAR16',  # raw 16-bit signed LE samples
                'sampleRate': 16000,  # 16 khz
                'languageCode': 'en-US',  # a BCP-47 language tag
            },
            'audio': {
                'content': speech_content.decode('UTF-8')
            }
        })
    response = service_request.execute()
    print(json.dumps(response))

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument(
        'speech_file', help='Full path of audio file to be recognized')
    args = parser.parse_args()
    main(args.speech_file)
You will need to export the credentials to the environment.

For Mac:

export GOOGLE_APPLICATION_CREDENTIALS="[PATH]"

For Windows via PowerShell:

$env:GOOGLE_APPLICATION_CREDENTIALS="C:\Users\username\Downloads\[FILE_NAME].json"

Where [PATH] is the path to the JSON file containing the credentials. Here's a link for setting up authentication for server-to-server production applications.
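If you'd rather not rely on the environment at all, oauth2client can also load the key file directly; a sketch (the path is a placeholder), which you could drop into get_speech_service in place of get_application_default():

from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.from_stream(
    '/path/to/service_account.json').create_scoped(
    ['https://www.googleapis.com/auth/cloud-platform'])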
Importing libraries in Python I'm new to Python, so this question might be easy. But I tried to search on the internet and didn't reach an explanation.I'm trying to imitate a simple script dealing with xml where it imports the followingfrom xml.etree import ElementTreefrom xml.etree.ElementTree import Elementfrom xml.etree.ElementTree import SubElementQuestion: Why all this? Why can't I just say import xml.etree and others are just like ancestors. Or even only import xml. I've tried this but it's not working. Why?
You should be able to do:

from xml.etree.ElementTree import *

However, this is bad form since you could have conflicting names from different package imports. It's always best to specify the exact classes you want and alias them as needed - a class name like Element, one would assume, could be very common and you could get collisions. I'd recommend you basically stick with your original:

from xml.etree import ElementTree
from xml.etree.ElementTree import Element, SubElement

I think the confusion you're hitting is that you have a mix of classes named after files that also have other classes in them. The first line will import the class ElementTree from the file ElementTree; this is convention. The second line will import the classes Element and SubElement, which also happen to be inside the file ElementTree.

This will work too and is more succinct:

from xml.etree.ElementTree import ElementTree, SubElement, Element
Read and parse a tab delimited file PHP I worked out this Python script to read a tab delimited file and place the values where the line starts with '\t' in an array. The code which I used for this:

import sys
from collections import OrderedDict
import json
import os

file = sys.argv[1]
f = open(file, 'r')
direc = '/dir/to/JSONs/'
fileJSON = sys.argv[1] + '.json'
key1 = OrderedDict()
summary_data = []
full_path = os.path.join(direc, fileJSON)
Read = True

for line in f:
    if line.startswith("#"):
        Read = True
    elif line.startswith('\tC'):
        Read = True
    elif line.startswith('\t') and Read == True:
        summary = line.strip().split('\t')
        key1[summary[1]] = int(summary[0])
        Read = True

summary_data.append(key1)
data = json.dumps(summary_data)
with open(full_path, 'w') as datafile:
    datafile.write(data)
print(data)

The data which I am parsing:

# BUSCO was run in mode: genome
C:98.0%[S:97.0%,D:1.0%],F:0.5%,M:1.5%,n:1440
    1411    Complete BUSCOs (C)
    1397    Complete and single-copy BUSCOs (S)
    14      Complete and duplicated BUSCOs (D)
    7       Fragmented BUSCOs (F)
    22      Missing BUSCOs (M)
    1440    Total BUSCO groups searched

But, I need this code in PHP. I have managed to open the file in PHP and to read this! Could someone please help me out?
I didn't see the point of the Read variable - it is always True in your code; the last 'elif' statement alone would be enough. Below is a PHP version of your script:

<?php
$fileName = $argv[1];
$dir = '/dir/to/JSONs/';
$fullPath = $dir . $fileName . '.json';
$data = [];
$output = fopen($fileName, 'r');
while (($line = fgets($output)) !== false) {
    if ($line[0] == "\t") {
        $summary = explode("\t", trim($line));
        if (count($summary) > 1) {
            $data[$summary[1]] = (int)$summary[0];
        }
    }
}
$strData = json_encode([$data]);
$input = fopen($fullPath, 'w+');
fwrite($input, $strData);
echo $strData;
django aggregate and filter I'm going to convert this SQL to django commands:

SELECT core.id, core.title, core.age_id, core.cat_id, max(date) AS max_date
FROM core
WHERE core.state = 'ABC'
GROUP BY cat_id, age_id

I tried this, but it does not work correctly:

Core.objects.values('id', 'title', 'age_id', 'cat_id').filter(state='ABC').annotate(
    max_date=Max('date')).aggregate(Count('age_id', 'cat_id'))
You have to do this with values() but restrict fields that you want to group byCore.objects.values('age_id', 'cat_id').filter(state='ABC').annotate(Max('date'), Count('age_id', 'cat_id'))
Python Selenium Element does not exist Been struggling to click a certain nested li in a ul. Each attempt throws an error. I'm trying to use xpath; however, any approach would be welcome. Also take into consideration that there is some extra text after the span tag.

Script:

driver.find_element_by_xpath("//label[text()=contains(span,'Tomorrow')]").click()

HTML:

<ul class="slide-flow">
  <li class="slide-flow__item">....</li>
  <li class="slide-flow__item">
    <div>
      <label class="radio" for="pack 2" data-track="2nd track">
        <div class="slide-flow__menu menu_brand">
          <strong>
            <span>Tomorrow</span>
          </strong>
          -Second Notataion
      </label>
      <span class="slide-flow__price">...</span>
    </div>
  </li>
  <li class="slide-flow__item"> ....</li>
</ul>
The spelling of "Tomorrow" is different in the XPath and HTML.In contains() function, try contains(., 'Tommorow'). Replace span with ..Correct XPath will be (Check the spelling of "Tomorrow")//label[text()=contains(.,'Tommorow')]
Python - Getting A Page's Complete HTML Via Url / Request ERROR I'm trying to get the html of this page:

url = 'http://www.metacritic.com/movie/oslo-august-31st/critic-reviews'

and I'm trying to get it using requests:

oslo = requests.get(url)

but they seem to know that I'm accessing it this way, and when I open up the file I get:

403 Forbidden
Error 403 Forbidden
Forbidden
Guru Meditation:
XID: 961167012
Varnish cache server

Is there any other way to access the html's other than manually copying and pasting every html from every page?
You need to specify a User-Agent header to get a 200 response:

import requests

url = 'http://www.metacritic.com/movie/oslo-august-31st/critic-reviews'
response = requests.get(url, headers={'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36'})
print(response.status_code)
how do i overwrite to a specific part of a text file in python For example, if in my text file I have:

Explosion,Bomb,Duck
Jim,Sam,Daniel

and I wanted to change the Daniel in that file so that nothing else would be affected, how would I achieve this without overwriting the whole file?
You can use fileinput:

import fileinput

with fileinput.FileInput(fileToSearch, inplace=True, backup='.bak') as file:
    for line in file:
        print(line.replace(textToSearch, textToReplace), end='')
        # textToSearch: Daniel
        # textToReplace: newName

Or if you want to keep it more simple, just do the operation by reading from one file and writing the replaced content to a second file, then overwrite the original:

f1 = open('orgFile.txt', 'r')
f2 = open('orgFileRep.txt', 'w')
for line in f1:
    f2.write(line.replace('textToSearch', 'textToReplace'))
f1.close()
f2.close()
Python 3.4 - Text to Speech with SAPI I was trying to use this code to convert text to speech with Python 3.4, but since my computer's main language is not English (I'm using Win7 x64) the voice and the accent are wrong (because I want it to "speak" English).

import win32com.client

speaker = win32com.client.Dispatch("SAPI.SpVoice")
speaker.Speak("Hello, it works!")

So, is there a way to change the voice/language (of the program, not the system)? Also, do you think there is a better way to do this? Perhaps a module that can work on every system?
Chances are that your OS only came with one voice as it is. There are several ways you can get English sounding output using IPA (International Phonetic Language) and SVSFIsXML as a flag in your speak call... but I'm guessing you'd want something less complicated than that.The first thing I'd do is grab an English voice if you don't have one already. (Check first by going into your control panel->speech recognition->text to speech and look at your voice selection. If it says "Microsoft Anna - English (United States)" then, yes you already have an English voice.)If not you'll have to grab another voice Microsoft Speech Platform - Runtime Languages (Version 11) . I highly recommend Microsoft Server Speech Text to Speech Voice (en-US, ZiraPro) as an English voice. You'll also want Microsoft Speech Platform - Software Development Kit (SDK) (Version 11).Honestly, I just kind of install them all because I think it's cool.Once you've got those all installed, what I've found to get the voices working is a neat registry hack I found at Voice Attack - Getting free alternate TTS voices working with Win7/8 64bit.Basically what this entails is that you do some string replacement in your MS Speech Platform voices in your registry so that what you see in yourHKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Speech Server\v11.0\VoicesHKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Speech Server\v11.0\Voicesregistries will wind up in:HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Speech\VoicesHKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Speech\VoicesOnce that's done, go back to control panel and look at all the voices you installed. You should be able to test them all, even in different languages. If the voices aren't playing then the voices you installed weren't the right bit (x86 vs 64).Now in python you'll have to make a SetVoice call. I've never in my life programmed in python, but I imagine the call you'd want would look something like speaker.SetVoice("Microsoft Server Speech Text to Speech Voice (en-US, ZiraPro)"). After you set the voice, that voice should be the one speaking all the time when you make a Speak call.Now if you have gotten to this point and the voices played in the control panel but not in your code, it could be that your program is 32bit/64bit or something, and then you gotta run back, reinstall the opposite 32bit/64bit voices, run your reg edits again, and try running your application again.A bit of work, but it'll pay off. If you do distribute your code, you'll have to make sure your voices are part of the client's registry, and messing with that can be a headache in itself.
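For the selection call itself, with the SAPI COM interface it looks roughly like this; the exact voice token name depends on what your registry edit exposed, so list the voices first and copy the string shown (the ZiraPro name below is just an example):

import win32com.client

speaker = win32com.client.Dispatch("SAPI.SpVoice")

# List the registered voice tokens to find the exact name
voices = speaker.GetVoices()
for i in range(voices.Count):
    print(voices.Item(i).GetDescription())

# Pick an English voice by its Name attribute, then speak
speaker.Voice = speaker.GetVoices(
    "Name=Microsoft Server Speech Text to Speech Voice (en-US, ZiraPro)").Item(0)
speaker.Speak("Hello, it works!")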
How to speed quicksort with numba? I am trying to implement the quicksort algorithm using numba in Python. It appears to be a lot slower than the numpy sort function. How could I improve it? My code is here:

import numba as nb

@nb.autojit
def quick_sort(list_):
    """
    Iterative version of quick sort
    """
    #temp_stack = []
    #temp_stack.append((left,right))
    max_depth = 1000
    left = 0
    right = list_.shape[0] - 1
    i_stack_pos = 0
    a_temp_stack = np.ndarray((max_depth, 2), dtype=np.int32)
    a_temp_stack[i_stack_pos, 0] = left
    a_temp_stack[i_stack_pos, 1] = right
    i_stack_pos += 1
    #Main loop to pop and push items until stack is empty
    while i_stack_pos > 0:
        i_stack_pos -= 1
        right = a_temp_stack[i_stack_pos, 1]
        left = a_temp_stack[i_stack_pos, 0]
        piv = partition(list_, left, right)
        #If items in the left of the pivot push them to the stack
        if piv - 1 > left:
            #temp_stack.append((left,piv-1))
            a_temp_stack[i_stack_pos, 0] = left
            a_temp_stack[i_stack_pos, 1] = piv - 1
            i_stack_pos += 1
        #If items in the right of the pivot push them to the stack
        if piv + 1 < right:
            a_temp_stack[i_stack_pos, 0] = piv + 1
            a_temp_stack[i_stack_pos, 1] = right
            i_stack_pos += 1

@nb.autojit(nopython=True)
def partition(list_, left, right):
    """
    Partition method
    """
    #Pivot first element in the array
    piv = list_[left]
    i = left + 1
    j = right
    while 1:
        while i <= j and list_[i] <= piv:
            i += 1
        while j >= i and list_[j] >= piv:
            j -= 1
        if j <= i:
            break
        #Exchange items
        list_[i], list_[j] = list_[j], list_[i]
    #Exchange pivot to the right position
    list_[left], list_[j] = list_[j], list_[left]
    return j

My test code is here:

x = np.random.random_integers(0, 1000, 1000000)
y = x.copy()
quick_sort(y)
z = np.sort(x)
np.testing.assert_array_equal(z, y)
y = x.copy()
with Timer('nb'):
    numba_fns.quick_sort(y)
with Timer('np'):
    x = np.sort(x)

UPDATE: I have re-written the function to force the looping part of the code to run in nopython mode. The while loop does not appear to be causing nopython to fail. However, I have not gained any performance improvement:

@nb.autojit
def quick_sort2(list_):
    """
    Iterative version of quick sort
    """
    max_depth = 1000
    left = 0
    right = list_.shape[0] - 1
    i_stack_pos = 0
    a_temp_stack = np.ndarray((max_depth, 2), dtype=np.int32)
    a_temp_stack[i_stack_pos, 0] = left
    a_temp_stack[i_stack_pos, 1] = right
    i_stack_pos += 1
    #Main loop to pop and push items until stack is empty
    return _quick_sort2(list_, a_temp_stack, left, right)

@nb.autojit(nopython=True)
def _quick_sort2(list_, a_temp_stack, left, right):
    i_stack_pos = 1
    while i_stack_pos > 0:
        i_stack_pos -= 1
        right = a_temp_stack[i_stack_pos, 1]
        left = a_temp_stack[i_stack_pos, 0]
        piv = partition(list_, left, right)
        #If items in the left of the pivot push them to the stack
        if piv - 1 > left:
            a_temp_stack[i_stack_pos, 0] = left
            a_temp_stack[i_stack_pos, 1] = piv - 1
            i_stack_pos += 1
        if piv + 1 < right:
            a_temp_stack[i_stack_pos, 0] = piv + 1
            a_temp_stack[i_stack_pos, 1] = right
            i_stack_pos += 1

@nb.autojit(nopython=True)
def partition(list_, left, right):
    """
    Partition method
    """
    #Pivot first element in the array
    piv = list_[left]
    i = left + 1
    j = right
    while 1:
        while i <= j and list_[i] <= piv:
            i += 1
        while j >= i and list_[j] >= piv:
            j -= 1
        if j <= i:
            break
        #Exchange items
        list_[i], list_[j] = list_[j], list_[i]
    #Exchange pivot to the right position
    list_[left], list_[j] = list_[j], list_[left]
    return j
In general, if you don't force the nopython mode you have high chances of getting no performance improvement. Citing from the docs about nopython mode:

[nopython] mode produces the highest performance code, but requires that the native types of all values in the function can be inferred, and that no new objects are allocated

Therefore your np.ndarray call is triggering object mode and hence slowing down the code. Try to allocate the work array outside the jitted function, like:

def quick_sort(list_):
    max_depth = 1000
    temp_stack_ = np.empty((max_depth, 2), dtype=np.int32)
    _quick_sort(list_, temp_stack_)

@nb.autojit(nopython=True)
def _quick_sort(list_, temp_stack_):
    ...

(Note the np.empty with a shape tuple: np.array((max_depth, 2)) would instead create the two-element array [1000, 2] rather than a 1000x2 work array.)
pymssql ( python module ) unable to use temporary tables This isn't a question, so much as a pre-emptive answer. (I have gotten lots of help from this website & wanted to give back.) I was struggling with a large bit of SQL query that was failing when I tried to run it via python using pymssql, but would run fine when run directly through MS SQL. (E.g., in my case, I was using MS SQL Server Management Studio to run it outside of python.) Then I finally discovered the problem: pymssql cannot handle temporary tables. At least not my version, which is still 1.0.1. As proof, here is a snippet of my code, slightly altered to protect any IP issues:

conn = pymssql.connect(host=sqlServer, user=sqlID, password=sqlPwd, \
                       database=sqlDB)
cur = conn.cursor()
cur.execute(testQuery)

The above code FAILS (returns no data, to be specific, and spits the error "pymssql.OperationalError: No data available." if you call cur.fetchone()) if I call it with testQuery defined as below:

testQuery = """
CREATE TABLE #TEST (
    [sample_id] varchar (256)
    ,[blah] varchar (256)
)
INSERT INTO #TEST
SELECT DISTINCT
    [sample_id]
    ,[blah]
FROM [myTableOI]
WHERE [Shipment Type] in ('test')
SELECT * FROM #TEST
"""

However, it works fine if testQuery is defined as below.

testQuery = """
SELECT DISTINCT
    [sample_id]
    ,[blah]
FROM [myTableOI]
WHERE [Shipment Type] in ('test')
"""

I did a Google search as well as a search within Stack Overflow, and couldn't find any information regarding the particular issue. I also looked under the pymssql documentation and FAQ, found at http://code.google.com/p/pymssql/wiki/FAQ, and did not see anything mentioning that temporary tables are not allowed. So I thought I'd add this "question".
Update: July 2016The previously-accepted answer is no longer valid. The second "will NOT work" example does indeed work with pymssql 2.1.1 under Python 2.7.11 (once conn.autocommit(1) is replaced with conn.autocommit(True) to avoid "TypeError: Cannot convert int to bool").
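For reference, a sketch of the temp-table round trip under those versions (connection parameters are placeholders taken from the question):

import pymssql

conn = pymssql.connect(host='sqlServer', user='sqlID',
                       password='sqlPwd', database='sqlDB')
conn.autocommit(True)  # pass a bool, not an int, per the TypeError noted above
cur = conn.cursor()
cur.execute("""
CREATE TABLE #TEST ([sample_id] varchar(256), [blah] varchar(256))
INSERT INTO #TEST
SELECT DISTINCT [sample_id], [blah]
FROM [myTableOI] WHERE [Shipment Type] IN ('test')
SELECT * FROM #TEST
""")
print(cur.fetchall())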
Python: how do you change the value of a global variable in a function? Why can't I change the variable can_answer to False without getting this error? This is just some quick code I wrote.

import random

questions = ["What's 1+1?", "What's 2+2?"]

def question():
    global can_answer
    can_answer = True
    print(random.choice(questions))

def ans(answer):
    if can_answer:
        if can_answer == 2 or 4:
            print('correct')
        else:
            print('wrong')
        can_answer = False
    else:
        print('no questions to answer')
Declare the variable global in every function that assigns to it; ans reassigns can_answer, so it needs its own global statement (without it, Python treats can_answer as a local and raises an error). In this case, I also guess you wrote the comparison wrong here: if can_answer == 2 or 4: should test the answer argument, and x == 2 or 4 is always truthy, so spell out both comparisons. The corrected code:

import random

questions = ["What's 1+1?", "What's 2+2?"]

def question():
    global can_answer
    can_answer = True
    print(random.choice(questions))

def ans(answer):
    global can_answer
    if can_answer:
        if answer == 2 or answer == 4:
            print('correct')
        else:
            print('wrong')
        can_answer = False
    else:
        print('no questions to answer')
Set dataframe column value based on count of values and group by The problem: I have a basic python/pandas dataframe with a unit id ("Sarzs_no") and a column based on time of the day ("Time_of_day", two values: day/night). Unfortunately the time of day is ambiguous, in the sense that one unit can contain both values (day and night), but it should contain only one. I would like a way to change the time of day values for every unit based on how many counts it has for day and night: if it has more counts for day, then it should be set as day for all of its values, and vice versa.

I tried to write a function for this problem:

def dayoftime(napszak_str):
    sarzs = row["Sarzs_no"]
    day = bfdataf[bfdataf["Sarzs_no"] == sarzs].groupby("Time_of_day").size()[0]
    night = bfdataf[bfdataf["Sarzs_no"] == sarzs].groupby("Time_of_day").size()[0]
    if day >= night:
        return "day"
    else:
        return "night"

...and then call it:

bfdataf["new_tod"] = bfdataf["Time_of_day"].apply(dayoftime)

But unfortunately I get "index out of bound" errors. Could you please help me to solve this problem? Thank you!
You can get the counts per group with GroupBy.size, create a DataFrame with join, and finally create the column with numpy.where:

df = bfdataf.groupby(['Sarzs_no', 'Time_of_day']).size().unstack(fill_value=0)
df = bfdataf.join(df, on='Sarzs_no')
bfdataf['new_tod'] = np.where(df['day'] >= df['night'], 'day', 'night')

Another solution is to filter the columns and get the counts by sum per group with transform:

days = (bfdataf['Time_of_day'] == 'day').groupby(bfdataf['Sarzs_no']).transform('sum')
nights = (bfdataf['Time_of_day'] == 'night').groupby(bfdataf['Sarzs_no']).transform('sum')
bfdataf['new_tod'] = np.where(days >= nights, 'day', 'night')

Another solution, thanks @Jon Clements, is to use idxmax on a helper Series and create the new column with map:

s = bfdataf.groupby(['Sarzs_no', 'Time_of_day']).size().unstack(fill_value=0).idxmax(axis=1)
bfdataf['new_tod'] = bfdataf['Sarzs_no'].map(s)

print(bfdataf)
   Sarzs_no Time_of_day new_tod
0    101/16         day     day
1    101/16         day     day
2    101/16         day     day
3    101/16         day     day
4    101/16         day     day
5    101/16       night     day
6    101/16       night     day
7    101/16       night     day
8    101/17       night   night
9    101/17       night   night
10   101/17       night   night
11   101/17       night   night
12   101/17       night   night
13   101/17       night   night
14   101/17       night   night
15   101/17       night   night
16   101/17       night   night
17   101/17       night   night
18   101/17         day   night
Python - Key of max value in nested dict, generalization I would like to know how we can return the key of the max value in nested dicts. The case of a dict of dicts (case 1) has already been answered elsewhere, but I do not manage to generalise.

Case 1: dict of dict

dict = {'key1': {'subkey1': value11, 'subkey2': value12, ...},
        'key2': {'subkey1': value21, 'subkey2': value22, ...},
        ...}

In order to get the key with the maximum 'subkey1' value I would do:

max(dict, key=lambda x: dict[x].get('subkey1'))

Case 2:

dict = {'key1': {'subkey1': {'subsubkey1': value111, 'subsubkey2': value112, ...}},
        'key2': {'subkey2': {'subsubkey1': value211, 'subsubkey2': value212, ...}},
        ...}

So my questions are:

How can we generalise the formula of case 1 if I want the 'key' of a maximum 'subsubkey'?
In terms of performance, would another solution be more efficient than a 1-line formula?

Thank you for your help and contribution
This answer assumes you know the path of the nested key. Then one possible view of case 2 is:

((d.get(key)).get(subkey1)).get(subsubkey1)

You want to apply the function get in a cumulative way; notice that get can be exchanged with the operator [], so the above line can also be seen this way:

((d[key])[subkey1])[subsubkey1]

This is what the function reduce does, from the documentation:

Apply function of two arguments cumulatively to the items of sequence, from left to right, so as to reduce the sequence to a single value. For example, reduce(lambda x, y: x+y, [1, 2, 3, 4, 5]) calculates ((((1+2)+3)+4)+5). The left argument, x, is the accumulated value and the right argument, y, is the update value from the sequence.

So you can nest your calls in the following way:

from functools import reduce

d = {'key1': {'subkey1': {'subsubkey1': 1, 'subsubkey2': 2}},
     'key2': {'subkey1': {'subsubkey1': 2, 'subsubkey2': 3}}}

def value(first_key, di=None, path=None):
    lst = [di, first_key] + path
    return reduce(lambda x, y: x[y], lst)

p = ['subkey1', 'subsubkey1']
print(max(d, key=lambda k: value(k, d, p)))

Output

key2

The generalization of key= comes from the value function: basically the function receives the top level key, the dictionary and the path of keys, then using reduce applies the lookups in a cumulative way.

UPDATE

In a more generalized manner, if you have different paths to the 'sub...subkey' for each of the top keys ['key1', 'key2', ...] you could use a dictionary of paths for each key in the following way:

d = {'key1': {'subkey1': {'subsubkey1': 1, 'subsubkey2': 2}},
     'key2': {'subkey2': {'subsubkey1': 2, 'subsubkey2': 3}}}

paths = {
    'key1': ['subkey1', 'subsubkey1'],
    'key2': ['subkey2', 'subsubkey1']
}

print(max(d, key=lambda k: value(k, d, paths[k])))

Output

key2

Note that the value function remains the same; only the path for each key is different. Also this solution gives you the added value of different path lengths for each top key.
Class doesn't recognize the attribute class Hand: def __int__(self): self.value= 0 self.ace=False self.Cards = [] def __str__(self): hand_comp="" for card in self.Cards: card_name=card.__str__() hand_comp+= " " + card_name return 'The card has %s' %(hand_comp) def card_add(self, card): '''Add another card to the hand''' self.Cards.append(card)Every time i run this, i get an error saying "Object 'Hand' has no attributes 'Cards' . How can i rectify this?
You are using the wrong method name for initializing the class. It should be __init__, not __int__:def __init__(self):
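With that one-character fix the attributes are created and card_add works; a quick check (the card string below is just a stand-in for a real card object):

class Hand:
    def __init__(self):  # was __int__, so the initializer never ran
        self.value = 0
        self.ace = False
        self.Cards = []

    def card_add(self, card):
        '''Add another card to the hand'''
        self.Cards.append(card)

hand = Hand()
hand.card_add('Ace of Spades')
print(hand.Cards)  # ['Ace of Spades']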
Python 3.6 - Save dictionary containing type 'bytes' to file I'm working on a project where I identify all the unique occurrences of fixed size blocks within a binary file and save then save the result to a binary file (it needs to work across multiple languages).My approach is the following: I read each block of the file, hash, and store the unique hashes and binary code to a dictionary. Each time the program sees a repeated hash, it appends the position for later reconstruction. An examples of the resulting dictionary is represented below:dict = {'d59fce39b5d8d4b278acbf2f5be0353c': [b'\xc5\xd7\x14\x84', 0, 1, 4], 'bf937a85a0f950f431a4c9c1aeca8686': [b'\x08\xe7\x07\x8f', 2, 3, 5]}Then, I'm using with open('out.data, 'wb') as f: to do save the file to disk (f.write(dict)), but I get the following error:TypeError: a bytes-like object is required, not 'dict'Other solutions I found here didn't help me. I tried passing the dictionary to a JSON object, as suggested here, but got:new_dict = json.dumps(dict)TypeError: Object of type 'bytes' is not JSON serializableI'm working with arbitrary bytes, thus, encoding does not seem like a solution for this issue.
Have you tried pickle?import picklewith open('out.pickle', 'wb') as f: pickle.dump(dict, f, protocol=pickle.HIGHEST_PROTOCOL)with open('out.pickle', 'rb') as f: b_dict = pickle.load(f)# This is to check that you saved the same dict in memoryprint(dict == b_dict)
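Since the question says the file needs to work across multiple languages, note that pickle is Python-specific. A hedged alternative sketch: base64-encode the raw bytes so the whole structure becomes plain JSON (the file name and the small dictionary d below are illustrative):

import base64
import json

d = {'d59fce39b5d8d4b278acbf2f5be0353c': [b'\xc5\xd7\x14\x84', 0, 1, 4]}

# Replace the bytes entry with an ASCII-safe base64 string
serializable = {h: [base64.b64encode(v[0]).decode('ascii')] + v[1:]
                for h, v in d.items()}
with open('out.json', 'w') as f:
    json.dump(serializable, f)

# Reading it back: turn the base64 string into bytes again
with open('out.json') as f:
    restored = {h: [base64.b64decode(v[0])] + v[1:]
                for h, v in json.load(f).items()}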
Store Instance of Class in String How can I store an instance of a class in a string? I tried eval, but that didn't work and threw SyntaxError. I would like this to work for user-defined classes and built-in classes (int, str, float).Code:class TestClass: def __init__(self, number): self.number = numberi = TestClass(14)str_i = str(i)print(eval(str_i)) # SyntaxErrorprint(eval("jkiji")) # Not Definedprint(eval("14")) # Works!print(eval("14.11")) # Works!
The convention is to return a string with which you could instantiate the same object, if at all reasonable, in the __repr__ method.class TestClass: def __init__(self, number): self.number = number def __repr__(self): return f'{self.__class__.__name__}({self.number})'Demo:>>> t = TestClass(14)>>> tTestClass(14)>>> str(t)'TestClass(14)'>>> eval(str(t))TestClass(14)(Of course, you should not actually use eval to reinstantiate objects in this way.)
AWS cdk python, which IAM role for a glue crawler with a daily trigger? I am trying to deploy a glue crawler for an s3. Unfortunately I cant manage to find an appropriate IAM role that allows the crawler to run. The permissions I need are just to read/write to S3, and logs:PutLogsEvent, but somehow I am not getting it right.Here is my code, it can be deployed but the crawler does not have permissions to run.from aws_cdk import ( aws_events as events, aws_lambda as lambda_, aws_events_targets as targets, aws_iam as iam, aws_glue as glue, core)class MyStack(core.Stack): def __init__(self, scope: core.Construct, id: str, **kwargs) -> None: super().__init__(scope, id, **kwargs) # what should I put in the role exactly? glue_role = iam.Role( self, 'Role__arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole', assumed_by=iam.ServicePrincipal('glue.amazonaws.com'), ) glue_trigger = glue.CfnTrigger(self, "glue-daily-trigger", name = "etl-trigger", schedule = "cron(5 * * * ? *)", # every hour at X.05, every day type="SCHEDULED", actions=[ { "jobName": "glue_crawler-daily" } ], start_on_creation=True ) crawler_name = 'crawler_units_data' glue_crawler = glue.CfnCrawler( self, crawler_name, name=crawler_name, database_name='data_science', role=glue_role.role_arn,#'arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole', targets={"s3Targets": [{"path": "s3://random_s3/units/"}]}, ) glue_trigger.add_depends_on(glue_crawler)I tried several things and translating code from javascript examples like this one but the methods being called from javascript do not map 100% with python.This role (created from the GUI) works correctly and has 2 policies.Policy to read and write from s3{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:PutObject" ], "Resource": [ "arn:aws:s3:::random_s3/units*" ] } ]}AWSGlueServicePolicy{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "glue:*", "s3:GetBucketLocation", "s3:ListBucket", "s3:ListAllMyBuckets", "s3:GetBucketAcl", "ec2:DescribeVpcEndpoints", "ec2:DescribeRouteTables", "ec2:CreateNetworkInterface", "ec2:DeleteNetworkInterface", "ec2:DescribeNetworkInterfaces", "ec2:DescribeSecurityGroups", "ec2:DescribeSubnets", "ec2:DescribeVpcAttribute", "iam:ListRolePolicies", "iam:GetRole", "iam:GetRolePolicy", "cloudwatch:PutMetricData" ], "Resource": [ "*" ] }, { "Effect": "Allow", "Action": [ "s3:CreateBucket" ], "Resource": [ "arn:aws:s3:::aws-glue-*" ] }, { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:PutObject", "s3:DeleteObject" ], "Resource": [ "arn:aws:s3:::aws-glue-*/*", "arn:aws:s3:::*/*aws-glue-*/*" ] }, { "Effect": "Allow", "Action": [ "s3:GetObject" ], "Resource": [ "arn:aws:s3:::crawler-public*", "arn:aws:s3:::aws-glue-*" ] }, { "Effect": "Allow", "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents" ], "Resource": [ "arn:aws:logs:*:*:/aws-glue/*" ] }, { "Effect": "Allow", "Action": [ "ec2:CreateTags", "ec2:DeleteTags" ], "Condition": { "ForAllValues:StringEquals": { "aws:TagKeys": [ "aws-glue-service-resource" ] } }, "Resource": [ "arn:aws:ec2:*:*:network-interface/*", "arn:aws:ec2:*:*:security-group/*", "arn:aws:ec2:*:*:instance/*" ] } ]}
As it turns out, I needed to pass the name and policy in a different way glue_role = iam.Role( self, 'glue_role_id2323', role_name = 'Rolename', assumed_by=iam.ServicePrincipal('glue.amazonaws.com'), managed_policies=[iam.ManagedPolicy.from_aws_managed_policy_name('service-role/AWSGlueServiceRole')] )
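If the crawler also needs the S3 read/write permissions from the hand-made role in the question, a statement can be attached to the same role object; a sketch assuming the bucket path from the question:

glue_role.add_to_policy(iam.PolicyStatement(
    actions=['s3:GetObject', 's3:PutObject'],
    resources=['arn:aws:s3:::random_s3/units*'],
))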
python get checkbutton value after end of mainloop I want to find out which checkbuttons were checked after the application is closed.If I save the checkbutton values in any collection, it's not possible to access that collection after the application is destroyed.app = Application(path_to_files)app.initialize(data)app.mainloop()#i want to know all checkbuttons values on this line checkerGUI.pyimport Tkinter as tkimport tkFontimport webbrowserimport osfrom PIL import ImageTk, Imageimport ctypesclass Application(tk.Frame): def __init__(self, pwd="", master=None): tk.Frame.__init__(self, master) self.initImages(pwd) self.master.resizable(width=False, height=False) self.index = 0 self.master.bind("<Return>", self.close) self.grid() self.games = [] self.gamesHiddenFlags = {} def close(self, event): self.master.destroy() def getGamesHiddenFlags(self): return self.gamesHiddenFlags def initialize(self, games): self.games = games for game in self.games: self.gamesHiddenFlags[game.name] = tk.BooleanVar() self.createWidgetsFromGame(game, self.gamesHiddenFlags[game.name]) def initImages(self, path): self.images = {} buf = Image.open(os.path.join(path, "images", "Classic.png")) buf = buf.resize((20, 20), Image.ANTIALIAS) # The (250, 250) is (height, width) self.images['Classic'] = ImageTk.PhotoImage(buf) buf = Image.open(os.path.join(path, "images", "Jeopardy.png")) buf = buf.resize((20, 20), Image.ANTIALIAS) self.images['Jeopardy'] = ImageTk.PhotoImage(buf) buf = Image.open(os.path.join(path, "images", "On-site.png")) buf = buf.resize((20, 20), Image.ANTIALIAS) self.images['On-site'] = ImageTk.PhotoImage(buf) buf = Image.open(os.path.join(path, "images", "On-line.png")) buf = buf.resize((20, 20), Image.ANTIALIAS) self.images['On-line'] = ImageTk.PhotoImage(buf) def google_link_callback(event, site): webbrowser.open_new(site) def ShowImages(self, frame_in, type_img, place_img): type_img = type_img.replace("Attack-Defense", "Classic").replace("Attack", "Classic") type_img = type_img.replace("Hack quest", "Jeopardy") label = tk.Label(frame_in, image=self.images[type_img]) label.pack(side="right") label = tk.Label(frame_in, image=self.images[place_img]) label.pack(side="right") def createWidgetsFromGame(self, game, flag): frame = tk.Frame(self, relief='sunken') frame.grid(row=0, column=self.index, sticky="WN") frame_in = tk.Frame(frame) frame_in.grid(row=0, sticky="WE", column=self.index) header = tk.Label(frame_in, anchor="nw", justify="left", text="Игра: ") header.pack(expand=True, fill="x", side="left") self.ShowImages(frame_in, game.type, game.place_type) header = tk.Label(frame, anchor="nw", justify="left", text="Состояние: ") header.grid(row=1, sticky="WE", column=self.index) header = tk.Label(frame, anchor='nw', justify="left", text="Дата проведения: ", height=2) header.grid(row=3, sticky="WEN", column=self.index) header = tk.Label(frame, anchor="nw", justify="left", text="Продолжительность: ") header.grid(row=5, sticky="WE", column=self.index) header = tk.Label(frame, anchor="nw", justify="left", text="Сайт игры: ") header.grid(row=6, sticky="WE", column=self.index) header = tk.Label(frame, anchor="nw", justify="left", text="Ранг: ") header.grid(row=7, sticky="WE", column=self.index) header = tk.Checkbutton(frame, text="Не показывать: ", variable=flag) # There is variable header.grid(row=8, sticky="WE", column=self.index) self.index += 1 frame2 = tk.Frame(self, relief='sunken') frame2.grid(row=0, column=self.index, sticky="WN") header = 
tk.Label(frame2, anchor="nw", justify="left", text=game.name) header.grid(row=0, sticky="WE", column=self.index) header = tk.Label(frame2, anchor="nw", justify="left", text=game.state) header.grid(row=1, sticky="WE", column=self.index) header = tk.Label(frame2, anchor="nw", justify="left", text=game.date['start'].strftime("с %d %B в %H:%M")) header.grid(row=2, sticky="WE", column=self.index) header = tk.Label(frame2, anchor="nw", justify="left", text=game.date['end'].strftime("до %d %B в %H:%M")) header.grid(row=3, sticky="WE", column=self.index) header = tk.Label(frame2, anchor="nw", justify="left", text="%d дней %d часов" % (game.duration['days'], game.duration['hours'])) header.grid(row=4, sticky="WE", column=self.index) header = tk.Label(frame2, anchor="nw", justify="left", fg='blue', font=tkFont.Font(underline=1, size=10), cursor="hand2", text=game.site) header.bind("<Button-1>", lambda e: self.google_link_callback(game.site)) header.grid(row=5, sticky="WE", column=self.index) header = tk.Label(frame2, anchor="nw", justify="left", text=game.rank) header.grid(row=6, sticky="WE", column=self.index) self.index += 1
OK, I modified your code a bit. You will find explanations as comments inside the code. I added the protocol method (which you can call with self.master.protocol) and changed the close method, so that before it destroys the app it iterates through the checkbuttons and collects the flags in a dictionary, which then is converted to a global list. To test it I had to comment out the image part of your code and create my own game class to have a list of fake games.Don't know whether my solution is elegant, but under my test conditions it worked. So after app.mainloop() try print(out), which will give you a list of zeroes and ones.Hope it helps.Ah, and please check the indentation! This editor here did something strange to it when I pasted my code. import Tkinter as tkimport tkFontimport webbrowserimport osfrom PIL import ImageTk, Imageimport ctypesclass Application(tk.Frame):def __init__(self, pwd="", master=None): tk.Frame.__init__(self, master) self.initImages(pwd) self.master.resizable(width=False, height=False) self.index = 0 self.master.bind("<Return>", self.close_by_keyboard)#changed self.master.protocol("WM_DELETE_WINDOW", self.close_by_mouse)#added self.grid() self.games = [] self.gamesHiddenFlags = {} self.flags = {} #collection of flags global out #variable will exist after Application object is destroyed out = [] #List of flags for later use#get the variable and exit in case you are closing with a mouse click:def close_by_mouse(self): self.get_variables() self.master.destroy()#same as above for closing with return key:def close_by_keyboard(self, event): self.get_variables() self.master.destroy()def get_variables(self): for i in self.flags: out.append(self.flags[i].get())def getGamesHiddenFlags(self): return self.gamesHiddenFlagsdef initialize(self, games): self.games = games for game in self.games: self.gamesHiddenFlags[game.name] = tk.BooleanVar() self.createWidgetsFromGame(game, self.gamesHiddenFlags[game.name])def initImages(self, path): self.images = {} buf = Image.open(os.path.join(path, "images", "Classic.png")) buf = buf.resize((20, 20), Image.ANTIALIAS) # The (250, 250) is (height, width) self.images['Classic'] = ImageTk.PhotoImage(buf) buf = Image.open(os.path.join(path, "images", "Jeopardy.png")) buf = buf.resize((20, 20), Image.ANTIALIAS) self.images['Jeopardy'] = ImageTk.PhotoImage(buf) buf = Image.open(os.path.join(path, "images", "On-site.png")) buf = buf.resize((20, 20), Image.ANTIALIAS) self.images['On-site'] = ImageTk.PhotoImage(buf) buf = Image.open(os.path.join(path, "images", "On-line.png")) buf = buf.resize((20, 20), Image.ANTIALIAS) self.images['On-line'] = ImageTk.PhotoImage(buf)def google_link_callback(event, site): webbrowser.open_new(site)def ShowImages(self, frame_in, type_img, place_img): type_img = type_img.replace("Attack-Defense", "Classic").replace("Attack", "Classic") type_img = type_img.replace("Hack quest", "Jeopardy") label = tk.Label(frame_in, image=self.images[type_img]) label.pack(side="right") label = tk.Label(frame_in, image=self.images[place_img]) label.pack(side="right")def createWidgetsFromGame(self, game, flag): frame = tk.Frame(self, relief='sunken') frame.grid(row=0, column=self.index, sticky="WN") frame_in = tk.Frame(frame) frame_in.grid(row=0, sticky="WE", column=self.index) header = tk.Label(frame_in, anchor="nw", justify="left", text="Игра: ") header.pack(expand=True, fill="x", side="left") self.ShowImages(frame_in, game.type, game.place_type) header = tk.Label(frame, anchor="nw", justify="left", text="Состояние: ") 
header.grid(row=1, sticky="WE", column=self.index) header = tk.Label(frame, anchor='nw', justify="left", text="Дата проведения: ", height=2) header.grid(row=3, sticky="WEN", column=self.index) header = tk.Label(frame, anchor="nw", justify="left", text="Продолжительность: ") header.grid(row=5, sticky="WE", column=self.index) header = tk.Label(frame, anchor="nw", justify="left", text="Сайт игры: ") header.grid(row=6, sticky="WE", column=self.index) header = tk.Label(frame, anchor="nw", justify="left", text="Ранг: ") header.grid(row=7, sticky="WE", column=self.index) self.flags[self.index]=tk.IntVar() header = tk.Checkbutton(frame, text="Не показывать: ", variable=self.flags[self.index]) # There is variable header.grid(row=8, sticky="WE", column=self.index) self.index += 1 frame2 = tk.Frame(self, relief='sunken') frame2.grid(row=0, column=self.index, sticky="WN") header = tk.Label(frame2, anchor="nw", justify="left", text=game.name) header.grid(row=0, sticky="WE", column=self.index) header = tk.Label(frame2, anchor="nw", justify="left", text=game.state) header.grid(row=1, sticky="WE", column=self.index) header = tk.Label(frame2, anchor="nw", justify="left", text=game.date['start'].strftime("с %d %B в %H:%M")) header.grid(row=2, sticky="WE", column=self.index) header = tk.Label(frame2, anchor="nw", justify="left", text=game.date['end'].strftime("до %d %B в %H:%M")) header.grid(row=3, sticky="WE", column=self.index) header = tk.Label(frame2, anchor="nw", justify="left", text="%d дней %d часов" % (game.duration['days'], game.duration['hours'])) header.grid(row=4, sticky="WE", column=self.index) header = tk.Label(frame2, anchor="nw", justify="left", fg='blue', font=tkFont.Font(underline=1, size=10), cursor="hand2", text=game.site) header.bind("<Button-1>", lambda e: self.google_link_callback(game.site)) header.grid(row=5, sticky="WE", column=self.index) header = tk.Label(frame2, anchor="nw", justify="left", text=game.rank) header.grid(row=6, sticky="WE", column=self.index) self.index += 1
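Stripped of the game-specific widgets, the pattern the answer relies on is quite small; a minimal sketch, assuming Python 2 Tkinter as in the question:

import Tkinter as tk

out = []  # module-level list, still readable after the window is gone

root = tk.Tk()
flags = [tk.IntVar() for _ in range(3)]
for flag in flags:
    tk.Checkbutton(root, text='Hide', variable=flag).pack()

def close():
    # Read every checkbutton value *before* the widgets are destroyed
    out.extend(flag.get() for flag in flags)
    root.destroy()

root.protocol('WM_DELETE_WINDOW', close)    # window close button
root.bind('<Return>', lambda event: close())  # Return key
root.mainloop()
print(out)  # e.g. [1, 0, 1]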
Python list can't delete first item I'm trying to create a list of text files from a directory so I can extract key data from them, however, the list my function returns also contains a list of the file pathways as the first item of the list. I've tried del full_text[0] which didn't work, as well as any other value, and also the remove function. Any ideas as to why this might be happening?Thanks import globfile_paths = []file_paths.extend(glob.glob("C:\Users\12342255\PycharmProjects\Sequence diagrams\*"))matching_txt = [s for s in file_paths if ".txt" in s]print matching_txtfull_text = []def fulltext(): for file in matching_txt: f = open(file, "r") ftext = f.read() all_seqs = ftext.split("title ") print all_seqsfull_text.append(fulltext())print full_text
You can use slicing to get rid of the first element - full_text[1:]. This creates a copy of the list. Otherwise, you can call full_text.pop(0), which removes the first element in place, and keep using full_text.
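For completeness, a tiny demonstration of the difference between the two (the list content is made up):

full_text = ['paths', 'a', 'b']
trimmed = full_text[1:]   # new list ['a', 'b']; full_text is unchanged
full_text.pop(0)          # mutates in place; full_text is now ['a', 'b']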
How to create new PyQt4 windows from an existing window? I've been trying to call a new window from an existing one using python3 and Qt4.I've created two windows using Qt Designer (the main application and another one), and I've converted the .ui files generated by Qt Designer into .py scripts - but I can't seem to create new windows from the main application.I tried doing this:############### MAIN APPLICATION SCRIPT ################from PyQt4 import QtCore, QtGuiimport v2try: _fromUtf8 = QtCore.QString.fromUtf8except AttributeError: _fromUtf8 = lambda s: sclass Ui_Form(object): def setupUi(self, Form): Form.setObjectName(_fromUtf8("Form")) Form.resize(194, 101) self.button1 = QtGui.QPushButton(Form) self.button1.setGeometry(QtCore.QRect(50, 30, 99, 23)) self.button1.setObjectName(_fromUtf8("button1")) self.retranslateUi(Form) QtCore.QMetaObject.connectSlotsByName(Form) def retranslateUi(self, Form): Form.setWindowTitle(QtGui.QApplication.translate("Form", "Form", None, QtGui.QApplication.UnicodeUTF8)) self.button1.setText(QtGui.QApplication.translate("Form", "Ventana", None, QtGui.QApplication.UnicodeUTF8)) self.button1.connect(self.button1, QtCore.SIGNAL(_fromUtf8("clicked()")), self.mbutton1) def mbutton1(self): v2.main()if __name__ == "__main__": import sys app = QtGui.QApplication(sys.argv) Form = QtGui.QWidget() ui = Ui_Form() ui.setupUi(Form) Form.show() sys.exit(app.exec_())################## SECOND WINDOW #######################from PyQt4 import QtCore, QtGuitry: _fromUtf8 = QtCore.QString.fromUtf8except AttributeError: _fromUtf8 = lambda s: sclass Ui_Form(object): def setupUi(self, Form): Form.setObjectName(_fromUtf8("Form")) Form.resize(400, 300) self.label = QtGui.QLabel(Form) self.label.setGeometry(QtCore.QRect(160, 40, 57, 14)) self.label.setObjectName(_fromUtf8("label")) self.retranslateUi(Form) QtCore.QMetaObject.connectSlotsByName(Form) def retranslateUi(self, Form): Form.setWindowTitle(QtGui.QApplication.translate("Form", "Form", None, QtGui.QApplication.UnicodeUTF8)) self.label.setText(QtGui.QApplication.translate("Form", "LABEL 2", None, QtGui.QApplication.UnicodeUTF8))def main(): import sys app = QtGui.QApplication(sys.argv) Form = QtGui.QWidget() ui = Ui_Form() ui.setupUi(Form) Form.show() sys.exit(app.exec_())But I get this Error message: QCoreApplication::exec: The event loop is already running QPixmap: Must construct a QApplication before a QPaintDevice
Although pyuic can create executable scripts with the -x, --execute option, it is mainly intended for testing.The main purpose of pyuic is to create static python modules from Qt Desgner ui files that allow you to import the contained GUI classes into your application.Let's say you've created two ui files using Qt Designer and named them v1.ui and v2.ui.You would then create the two python modules like this:pyuic4 -o v1.py v1.uipyuic4 -o v2.py v2.uiNext, you would write a separate main.py script that imports the GUI classes from the modules, and creates instances of them as needed.So your main.py could look something like this:from PyQt4 import QtGuifrom v1 import Ui_Form1from v2 import Ui_Form2class Form1(QtGui.QWidget, Ui_Form1): def __init__(self, parent=None): QtGui.QWidget.__init__(self, parent) self.setupUi(self) self.button1.clicked.connect(self.handleButton) self.window2 = None def handleButton(self): if self.window2 is None: self.window2 = Form2(self) self.window2.show()class Form2(QtGui.QWidget, Ui_Form2): def __init__(self, parent=None): QtGui.QWidget.__init__(self, parent) self.setupUi(self)if __name__ == '__main__': import sys app = QtGui.QApplication(sys.argv) window = Form1() window.show() sys.exit(app.exec_())Note that I have changed the names of your GUI classes slightly to avoid namespace clashes. To give the GUI classes better names, just set the objectName property of the top-level class in Qt Desgner. And don't forget to re-run pyuic after you've made your changes!
Python - Should I use read-only @property without init or setter? Trying to get my head around property decorators. I found a solution posted for setting read-only attributes here. Setting a private attribute and then providing a @property getter method makes sense if you can specify the attribute in init. But what about the case where you want to use a function to calculate a read-only attribute? Let's say you have a class that calls an attribute (e.g. state) from another class and then calculates a new value that will be made available as an attribute:class MyState(object): def __init__(self, starting_value): self._foo = starting_value @property def foo(self): return self._foo @foo.setter def foo(self, value): self._foo = valueclass MyClass(object): def __init__(self, name=None): self.name = name @property def bar(self): state = MyState.foo return id(state)>mystate = MyState('chuff')>myclass = MyClass()>myclass.bar = 183097448LIn everything I have seen about property decorators, I have only see display methods reflected in the @property getter function, never functions that set the value of the variable. However, from reading the docs my understanding is that @setter requires an argument, which I don't have in this case. Is there any problem with calculating the read-only value of a class attribute in the @property getter method as opposed to simply passing an attribute that already exists?
There is no problem. @property is just doing less than you think. All it is is a bit of syntactic sugar to replace: a = foo.x with a = foo.x.getter(), and foo.x = bar with foo.x.setter(bar). That is, it allows you to replace attribute access with method calls. Those methods are allowed to do anything they like, which is the purpose of the property. I think you were being led astray by your first example where the property just passes through to an underlying hidden variable to make a pseudo-read-only variable. That is not really the standard use case. A very common example might be:class Rectangle(object): def __init__(self, w, h): self.w = w self.h = h @property def area(self): return self.w * self.hArea is a property of a rectangle, but it is derived from the width and height, and setting it doesn't really make any sense.
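A quick use of the Rectangle example shows the attribute-like access, and that assigning to area fails because no setter was defined:

r = Rectangle(3, 4)
print(r.area)   # 12, computed on access
r.w = 5
print(r.area)   # 20, always derived from the current w and h
r.area = 99     # AttributeError: can't set attribute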
Is there any equivalent of Perl's XML::TreePP in Python? Perl's XML::TreePP is really good for XMP parsing/writing. Is there any equivalent class in Python?
import xml.etree.ElementTreesee http://docs.python.org/2/library/xml.etree.elementtree.html
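A minimal parsing sketch with ElementTree, assuming a hypothetical file.xml:

import xml.etree.ElementTree as ET

tree = ET.parse('file.xml')    # read and parse the document
root = tree.getroot()
for child in root:             # iterate over the root's direct children
    print(child.tag, child.attrib)
tree.write('out.xml')          # write the (possibly modified) tree back out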
Python not ignoring empty items in list I have this code to print some strings to a text file, but I need python to ignore every empty items, so it doesn't print empty lines.I wrote this code, which is simple, but should do the trick:lastReadCategories = open('c:/digitalLibrary/' + connectedUser + '/lastReadCategories.txt', 'w')for category in lastReadCategoriesList: if category.split(",")[0] is not "" and category is not None: lastReadCategories.write(category + '\n') print(category) else: print("/" + category + "/")lastReadCategories.close()I can see no problem with it, yet, python keeps printing the empty items to the file. All categories are written in this notation: "category,timesRead", that's why I ask python to see if the first string before the comma is not empty. Then I see if the whole item is not empty (is not None). In theory I guess it should work, right?P.S.: I've already tried asking the if to check if 'category' is not "" and is not " ", still, the same result.
Test for boolean truth instead, and reverse your test so that you are certain that .split() will work in the first place; None.split() would throw an exception:if category is not None and category.split(",")[0]:The empty string is 'false-y', so there is no need to test it against anything.You could even just test for:if category and not category.startswith(','):for the same end result.From comments, it appears you have newlines cluttering up your data. Strip those away when testing:for category in lastReadCategoriesList: category = category.rstrip('\n') if category and not category.startswith(','): lastReadCategories.write(category + '\n') print(category) else: print("/{}/".format(category))Note that you can simply alter category inside the loop; this avoids having to call .rstrip() multiple times.
Can I trick numpy.histogram into behaving like numpy.bincount? So, I have lists of words and I need to know how often each word appears on each list. Using ".count(word)" works, but it's too slow (each list has thousands of words and I have thousands of lists). I've been trying to speed things up with numpy. I generated a unique numerical code for each word, so I could use numpy.bincount (since it only works with integers, not strings). But I get "ValueError: array is too big".So now I'm trying to tweak the "bins" argument of the numpy.histogram function to make it return the frequency counts I need (somehow numpy.histogram seems to have no trouble with big arrays). But so far no good. Anyone out there happens to have done this before? Is it even possible? Is there some simpler solution that I'm failing to see?
Don't use numpy for this. Use collections.Counter instead. It's designed for this use case.
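Calling .count(word) once per word rescans the whole list each time, so it is O(n*m) overall; Counter builds all the counts in a single pass. A minimal sketch for one list of words:

from collections import Counter

words = ['spam', 'egg', 'spam', 'ham', 'spam']
counts = Counter(words)        # one pass over the list
print(counts['spam'])          # 3
print(counts.most_common(2))   # e.g. [('spam', 3), ('egg', 1)]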
Does this mean I have a Nvidia GPU? abigail@abilina:~/nlp$ lspci00:00.0 Host bridge: Intel Corporation Skylake Host Bridge/DRAM Registers (rev 07)00:01.0 PCI bridge: Intel Corporation Skylake PCIe Controller (x16) (rev 07)00:02.0 Display controller: Intel Corporation HD Graphics 530 (rev 06)00:14.0 USB controller: Intel Corporation Sunrise Point-H USB 3.0 xHCI Controller (rev 31)00:15.0 Signal processing controller: Intel Corporation Sunrise Point-H Serial IO I2C Controller #0 (rev 31)00:15.1 Signal processing controller: Intel Corporation Sunrise Point-H Serial IO I2C Controller #1 (rev 31)00:16.0 Communication controller: Intel Corporation Sunrise Point-H CSME HECI #1 (rev 31)00:17.0 RAID bus controller: Intel Corporation SATA Controller [RAID mode] (rev 31)00:1c.0 PCI bridge: Intel Corporation Sunrise Point-H PCI Express Root Port #2 (rev f1)00:1c.2 PCI bridge: Intel Corporation Sunrise Point-H PCI Express Root Port #3 (rev f1)00:1c.3 PCI bridge: Intel Corporation Sunrise Point-H PCI Express Root Port #4 (rev f1)00:1e.0 Signal processing controller: Intel Corporation Sunrise Point-H Serial IO UART #0 (rev 31)00:1f.0 ISA bridge: Intel Corporation Sunrise Point-H LPC Controller (rev 31)00:1f.2 Memory controller: Intel Corporation Sunrise Point-H PMC (rev 31)00:1f.3 Audio device: Intel Corporation Sunrise Point-H HD Audio (rev 31)00:1f.4 SMBus: Intel Corporation Sunrise Point-H SMBus (rev 31)**01:00.0 VGA compatible controller: NVIDIA Corporation GM107 [GeForce GTX 750 Ti] (rev a2)**01:00.1 Audio device: NVIDIA Corporation Device 0fbc (rev a1)02:00.0 USB controller: ASMedia Technology Inc. ASM1142 USB 3.1 Host Controller03:00.0 Network controller: Intel Corporation Wireless 3165 (rev 79)04:00.0 Ethernet controller: Qualcomm Atheros QCA8171 Gigabit Ethernet (rev 10)Is the Nvidia GeForce GTX 750 Ti a GPU that can speed up deep learning?
To answer the question in the title: yes, you have an NVIDIA GPU, a GeForce GTX 750 Ti. You can use this GPU with TensorFlow if you install CUDA (see here for a full list of GPUs that can be used with CUDA: https://developer.nvidia.com/cuda-gpus). A detailed list of the prerequisites for the use of TensorFlow with GPUs can be found here:https://www.tensorflow.org/install/install_linuxHave fun!
Add argparse arguments on the go So basically, I have 10-50 additional parameter configurations that I want to potentially send into the script via argparse - and I don't want to configure them all in the python script.It's only me running the script, so there is no security issue. Is there any way I could call my script withpython myScript.py -parameter value -parameter2 value2 -parameter30 value30without having set up any of the parameter as arguments in my script? Or anything else with the same effect -- the **kwargs analogue of functions?
Where are your configuration parameters defined? If your program has a list of such parameters, you can always use a loop to add all of them.for parameter_name in parameter_names: argparser.add_argument( '--' + parameter_name, action='store', metavar='<string>' )There is no way to auto-parse unknown options, because there is no way to know whether you want 0, 1 or many arguments associated with an unknown option. However, you can delay parsing of unknown options using parse_known_args.
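A sketch of the parse_known_args route: unrecognized options come back as a flat list you can pair up yourself, assuming every option takes exactly one value:

import argparse

parser = argparse.ArgumentParser()
args, unknown = parser.parse_known_args(
    ['-parameter', 'value', '-parameter2', 'value2'])

# Pair up '-name value' tokens; assumes one value per option
extra = {unknown[i].lstrip('-'): unknown[i + 1]
         for i in range(0, len(unknown), 2)}
print(extra)  # {'parameter': 'value', 'parameter2': 'value2'}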
QSS not applied to first QSizeGrip in (composite widget) in QVBoxLayout As shown below, the widget TestWidget contains a QFrame and a CSS-styled QSizeGrip. Several TestWidget instances are placed in a QVBoxLayoutfrom PySide import QtGui, QtCoreimport sysclass TestWidget(QtGui.QWidget): def __init__(self , parent=None): super(TestWidget , self).__init__(parent) layout = QtGui.QVBoxLayout() layout.setContentsMargins( 0 , 0 , 0 , 0 ) frame = QtGui.QFrame() frame.setFrameShape(QtGui.QFrame.StyledPanel) frame.setMinimumHeight( 100 ) grip = QtGui.QSizeGrip(self) grip.setStyleSheet( "QSizeGrip { image: url(dots.png); }") grip.setCursor(QtCore.Qt.SplitVCursor) layout.addWidget(frame) layout.addWidget( grip , 0 , QtCore.Qt.AlignBottom | QtCore.Qt.AlignRight ) self.setLayout(layout)class TestApp(QtGui.QMainWindow): def __init__(self, parent=None): super(TestApp, self).__init__(parent) track1 = TestWidget() track2 = TestWidget() track3 = TestWidget() centralWidget = QtGui.QWidget() layout = QtGui.QVBoxLayout(centralWidget) layout.addWidget(track1) layout.addWidget(track2) layout.addWidget(track3) self.setCentralWidget(centralWidget) self.show() if __name__=="__main__": app=QtGui.QApplication(sys.argv) myapp = TestApp(); sys.exit(app.exec_()) As shown below, the size grip of the first TestWidget in the QVBoxLayout appears only if the TestWidget is the only element in the layout.Qt version 4.8.7PySide version 1.2.2The PySide2 version of the program (below) has the same issuefrom PySide2 import QtCorefrom PySide2.QtWidgets import QApplication, QWidget , QMainWindow , QGraphicsView , QVBoxLayout , QFrame , QSizeGrip , QWidgetimport sysclass TestWidget(QWidget): def __init__(self , parent=None): super(TestWidget , self).__init__(parent) layout = QVBoxLayout() layout.setContentsMargins( 0 , 0 , 0 , 0 ) frame = QFrame() frame.setFrameShape(QFrame.StyledPanel) frame.setMinimumHeight( 100 ) grip = QSizeGrip(self) grip.setStyleSheet( "QSizeGrip { image: url(dots.png); }") grip.setCursor(QtCore.Qt.SplitVCursor) layout.addWidget(frame) layout.addWidget( grip , 0 , QtCore.Qt.AlignBottom | QtCore.Qt.AlignRight ) self.setLayout(layout)class TestApp(QMainWindow): def __init__(self, parent=None): super(TestApp, self).__init__(parent) track1 = TestWidget() track2 = TestWidget() track3 = TestWidget() centralWidget = QWidget() layout = QVBoxLayout(centralWidget) layout.addWidget(track1) layout.addWidget(track2) layout.addWidget(track3) self.setCentralWidget(centralWidget) self.show() if __name__=="__main__": app = QApplication(sys.argv) myapp = TestApp(); sys.exit(app.exec_()) PySide2 version 5.12.1
What you observe is predetermined but undocumented behavior; if you inspect the source code you will see:Qt::Corner QSizeGripPrivate::corner() const{ Q_Q(const QSizeGrip); QWidget *tlw = qt_sizegrip_topLevelWidget(const_cast<QSizeGrip *>(q)); const QPoint sizeGripPos = q->mapTo(tlw, QPoint(0, 0)); bool isAtBottom = sizeGripPos.y() >= tlw->height() / 2; bool isAtLeft = sizeGripPos.x() <= tlw->width() / 2; if (isAtLeft) return isAtBottom ? Qt::BottomLeftCorner : Qt::TopLeftCorner; else return isAtBottom ? Qt::BottomRightCorner : Qt::TopRightCorner;}Here you can see that the size grip paints itself for a top corner whenever it sits in the upper half of the window, and this is the cause of the behavior you observe.The workaround is to override the paintEvent method of QSizeGrip:PySide2:import sysfrom PySide2 import QtCore, QtGui, QtWidgetsclass SizeGrip(QtWidgets.QSizeGrip): def paintEvent(self, event): painter = QtGui.QPainter(self) opt = QtWidgets.QStyleOptionSizeGrip() opt.initFrom(self) opt.corner = QtCore.Qt.BottomRightCorner self.style().drawControl(QtWidgets.QStyle.CE_SizeGrip, opt, painter, self)class TestWidget(QtWidgets.QWidget): def __init__(self , parent=None): super(TestWidget , self).__init__(parent) layout = QtWidgets.QVBoxLayout(self) layout.setContentsMargins( 0 , 0 , 0 , 0 ) frame = QtWidgets.QFrame() frame.setFrameShape(QtWidgets.QFrame.StyledPanel) frame.setMinimumHeight( 100 ) grip = SizeGrip(self) grip.setStyleSheet('''QSizeGrip { image: url(dots.png); }''') layout.addWidget(frame) layout.addWidget(grip , 0 , QtCore.Qt.AlignBottom | QtCore.Qt.AlignRight )class TestApp(QtWidgets.QMainWindow): def __init__(self, parent=None): super(TestApp, self).__init__(parent) centralWidget = QtWidgets.QWidget() layout = QtWidgets.QVBoxLayout(centralWidget) for _ in range(3): layout.addWidget(TestWidget()) self.setCentralWidget(centralWidget) self.show()if __name__=="__main__": app = QtWidgets.QApplication(sys.argv) myapp = TestApp(); sys.exit(app.exec_()) PySide:import sysfrom PySide import QtCore, QtGuiclass SizeGrip(QtGui.QSizeGrip): def paintEvent(self, event): painter = QtGui.QPainter(self) opt = QtGui.QStyleOptionSizeGrip() opt.initFrom(self) opt.corner = QtCore.Qt.BottomRightCorner self.style().drawControl(QtGui.QStyle.CE_SizeGrip, opt, painter, self)class TestWidget(QtGui.QWidget): def __init__(self , parent=None): super(TestWidget , self).__init__(parent) layout = QtGui.QVBoxLayout(self) layout.setContentsMargins( 0 , 0 , 0 , 0 ) frame = QtGui.QFrame() frame.setFrameShape(QtGui.QFrame.StyledPanel) frame.setMinimumHeight( 100 ) grip = SizeGrip(self) grip.setStyleSheet('''QSizeGrip { image: url(dots.png); }''') grip.setCursor(QtCore.Qt.SplitVCursor) layout.addWidget(frame) layout.addWidget(grip , 0 , QtCore.Qt.AlignBottom | QtCore.Qt.AlignRight )class TestApp(QtGui.QMainWindow): def __init__(self, parent=None): super(TestApp, self).__init__(parent) centralWidget = QtGui.QWidget() layout = QtGui.QVBoxLayout(centralWidget) for _ in range(3): layout.addWidget(TestWidget()) self.setCentralWidget(centralWidget) self.show()if __name__=="__main__": app = QtGui.QApplication(sys.argv) myapp = TestApp(); sys.exit(app.exec_())
Save Pandas df containing long list as csv file I am trying to save a pandas dataframe as .csv file. Currently my code looks like this:with open('File.csv', 'a') as f: df.to_csv(f, header=False)The saving works but the problem is that the lists in my dataframe are just compressed to [first,second,...,last] and all the entries in the middle are discarded. If I just look at the original dataframe all entries are there. Is there any way how I can convert the list to a string which contains all the elements (str(df) also discards the middle elements) or how I can save a full numpy array in a cell of a csv table?Thank you for your help,Viviane
I had issues while saving dataframes too. I had a dataframe in which some columns consisted of lists as their elements. When I saved the dataframe using df.to_csv and then read it back from disk using pd.read_csv, the lists and arrays were turned into strings of characters. Hence [1,2,3] was transformed to '[1,2,3]'. When I used the HDF5 format the problem was solved. If your dataframe is called df_temp, then you can use:store = pd.HDFStore('store.h5')store['df'] = df_tempto save the dataframe in HDF5 format, and you can read it back using the following command:store = pd.HDFStore('store.h5')df_temp_read = store['df']You can look at this answer. I should also mention that pickle did not work for me, since I lost the column names when reading from the file. Maybe I did something wrong, but apart from that, pickle can cause compatibility issues if you plan to read the file in different Python versions.
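For reference, the same round trip can be written with the DataFrame-level helpers instead of an explicit store object (this assumes the PyTables package is installed; the key name 'df' is arbitrary):

df_temp.to_hdf('store.h5', key='df')
df_temp_read = pd.read_hdf('store.h5', 'df')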
How to correctly specify the type of a variable in Python in order to prevent unresolved reference in PyCharm? I have a function:def foo(path): """ :param path: Path to a folder :type path: pathlib.Path """ new_path = path / 'tmp' return new_pathwhich gets a pathlib.Path object and adds 'tmp' at the end of this path. But PyCharm shows an "unresolved reference under path in new_path = path / 'tmp'.As it is obvious, this won't happen if the type of such variables is built-in. Note that this can be resolved if I import from pathlib import Path and change the def foo(path) to def foo(path: Path). But I want to know if there is any way to do this without unnecessary imports. I read about Python typing but can't find a solution.
In the below example PyCharm does issue the warning Unresolved reference 'Path' because the type Path used in the annotation of argument_path has not been imported from pathlib. The warning is issued both for the argument and the variable.def foo(argument_path: Path): """ :param argument_path: Path to a folder :type argument_path: Path """ new_path: Path = argument_path / 'tmp' return new_pathThe obvious way to solve this is using from pathlib import Path at the top of the module.from pathlib import Pathdef foo(argument_path: Path): new_path: Path = argument_path / 'tmp'But I want to know if there is any way to do this without unnecessary imports.There are two ways to do this without importing the name Path directly. Either use the fully qualified name pathlib.Path in the annotation (this still requires import pathlib at the top of the module)def foo(argument_path: pathlib.Path): new_path: pathlib.Path = argument_path / 'tmp'Or don't use type annotations and simply specify the type in the docstring.PyCharm's static type checker doesn't warn you if you declare a type that can't be resolved inside the docstring. Instead, PyCharm will issue a warning when you try to use the function. In the below example no typehints were used in the signature or variable declaration; the type is only specified in the docstring.def foo3(argument_path): """ :param argument_path: Path to a folder :type argument_path: pathlib.Path """ new_path = argument_path / 'tmp' return new_path# PyCharm will issue this warning:foo3("a_string") # Expected type 'Path', got 'str' insteadFinally, you are using reStructuredText syntax in the docstrings; this option should be specified in the project settings under: Settings > Tools > Python Integrated Tools > DocString Format
Filling dataframe with average of previous columns values I have a dataframe with having 5 columns with having missing values.How do i fill the missing values with taking the average of previous two column values.Here is the sample code for the same.coh0 = [0.5, 0.3, 0.1, 0.2,0.2] coh1 = [0.4,0.3,0.6,0.5]coh2 = [0.2,0.2,0.3]coh3 = [0.8,0.8]coh4 = [0.5]df= pd.DataFrame({'coh0': pd.Series(coh0), 'coh1': pd.Series(coh1),'coh2': pd.Series(coh2), 'coh3': pd.Series(coh3),'coh4': pd.Series(coh4)})dfHere is the sample output coh0coh1coh2coh3coh40 0.5 0.4 0.2 0.8 0.51 0.3 0.3 0.2 0.8 NaN2 0.1 0.6 0.3 NaN NaN3 0.2 0.5 NaN NaN NaN4 0.2 NaN NaN NaN NaNHere is the desired result i am looking for.The NaN value in each column should be replaced by the previous two columns average value at the same position. However for the first NaN value in second column, it will take the default last value of first column.The sample desired output would be like below.
For the exception you named, the first NaN, you can dodf.iloc[1, -1] = df.iloc[0, -1]though it doesn't make a difference in this case as the mean of .2 and .8 is .5, anyway.Either way, the rest is something like a rolling window calculation, except it has to be computed incrementally. Normally, you want to vectorize your operations and avoid iterating over the dataframe, but IMHO this is one of the rarer cases where it's actually appropriate to loop over the columns (cf. this excellent post), i.e.,compute the row-wise (axis=1) mean of up to two columns left of the current one (df.iloc[:, max(0, i-2):i]),and fill its NaN values from the resulting series.for i in range(1, df.shape[1]): mean_df = df.iloc[:, max(0, i-2):i].mean(axis=1) df.iloc[:, i] = df.iloc[:, i].fillna(mean_df)which results in coh0 coh1 coh2 coh3 coh40 0.5 0.4 0.20 0.800 0.50001 0.3 0.3 0.20 0.800 0.50002 0.1 0.6 0.30 0.450 0.37503 0.2 0.5 0.35 0.425 0.38754 0.2 0.2 0.20 0.200 0.2000
find all files matching exact name with and without an extension I'm using glob to scan a specified directory to find all files matching the specified name, but I can't seem to get it to work with files with no extension without finding files matching the name and then some...For example, here's some files:- file- file2- file.datThe resulting list should be:[ 'file', 'file.dat' ]How can I get glob to work as expected??
Shortly after posting this question, I thought of the answer, but had to put the phone down before I could post it...So instead of relying on glob to find literally all files, have it only look for files with extensions.Here's how to validate whether glob is even needed:import osfrom glob import globpath = 'subdirectory/filename' # no extensionfiles = [ path ] # for consistencyif not os.path.exists( path ): files = glob('%s.*'%path) if not files: raise IOError("no files found")for f in files: # do whateverThis should work with most names, including weirdly formatted names.
Django - Two Users Accessing The Same Data Let's say that I have a Django web application with two users. My web application has a global variable that exist on the server (a Pandas Dataframe created from data from an external SQL database).Let's say that a user makes an update request to that Dataframe and now that Dataframe is being updated. As the Dataframe is being updated, the other user makes a get request for that Dataframe. Is there a way to 'lock' that Dataframe until user 1 is finished with it and then finish the request made by user 2? EDIT:So the order of events should be:User 1 makes an update request, Dataframe is locked, User 2 makes a get request, Dataframe is finished updating, Dataframe is unlocked, User 2 gets his/her request.Lines of code would be appreciated!
Ehm... Django is not a server. It has a single-threaded development server in it, but it should not be used for anything beyond development and maybe not even for that. Django applications are deployed using WSGI. WSGI server running your app is likely to start several separate worker threads and will be killing and restarting these threads according to the rules in its configuration.This means, that you cannot rely on multiple requests hitting the same process. Django app lifecycle is between getting a request and returning a response. Anything that is not explicitly made persistent between those two events should be considered gone.So, when one of your users updates a global variable, this variable only exists in the one process this user randomly accessed. The second user might or might not hit the same process and therefore might or might not get the same copy of the variable. More than that, the process will sooner or later be killed by the WSGI server and all the updates will be gone.What I am getting at is that you might want to rethink your architecture before you bother with the atomic update problems.
Django's authentication backends change I have to change the Django's authentication backend (the default is django.contrib.auth.AuthenticationBackend) to one of my own. The problem is that since Django stores the authentication backend for a requested user in the session, it throws errors to me when I try to use the new backend. The option is to delete all the session information. Is there a better way to do this? Or else, what is the most preferred way?
Look at the Pinax project's account auth_backends, where it replaces the default backend with its own. I think the Pinax code will help you while changing Django's authentication backend.
OpenCV: how to restart a video when it finishes? I'm playing a video file, but how to play it again when it finishes?Javier
If you want to restart the video over and over again (aka looping it), you can do it by using an if statement for when the frame count reaches cap.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT) and then resetting the frame count and cap.set(cv2.cv.CV_CAP_PROP_POS_FRAMES, num) to the same value. I'm using OpenCV 2.4.9 with Python 2.7.9 and the below example keeps looping the video for me.import cv2cap = cv2.VideoCapture('path/to/video') frame_counter = 0while(True): # Capture frame-by-frame ret, frame = cap.read() frame_counter += 1 #If the last frame is reached, reset the capture and the frame_counter if frame_counter == cap.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT): frame_counter = 0 #Or whatever as long as it is the same as next line cap.set(cv2.cv.CV_CAP_PROP_POS_FRAMES, 0) # Our operations on the frame come here gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) # Display the resulting frame cv2.imshow('frame',gray) if cv2.waitKey(1) & 0xFF == ord('q'): break# When everything done, release the capturecap.release()cv2.destroyAllWindows()It also works to recapture the video instead of resetting the frame count:if frame_counter == cap.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT): frame_counter = 0 cap = cv2.VideoCapture(video_name)
String is in array returning strange results I'm creating a dynamic array of UUIDs, and I have another list of existing UUIDs, I want to delete items from the existing list that aren't in the new dynamic list. I'm trying to do this like this# Remove components that aren't being updatednew_component_id_for_existing_sections = []for component in new_components: if component.get('section_holder_id'): new_component_id_for_existing_sections.append(component.get('component_id')) print('New component IDs') for com in new_component_id_for_existing_sections: print(com) print('Checking existing components') for existing_component in self.get_object().components.all(): print(existing_component.component.id) print(existing_component.component.id not in new_component_id_for_existing_sections)So in here I create the array new_component_id_for_existing_sections which in my example has two IDs in, and self.get_object().components.all() has 3 ids in. But the output for this gives me.New component IDsacae9374-d32d-4752-ba5a-9437a54dbbe7a2a9d893-86ba-4d1d-938e-f638e7b2a4b2Checking existing componentsf3cb3cc6-4d66-4df5-8232-2c1f858c8632 <----Not in the array but returns that it isTrue acae9374-d32d-4752-ba5a-9437a54dbbe7Truea2a9d893-86ba-4d1d-938e-f638e7b2a4b2TrueThe first item is saying it's in the array, but it isn't and I can't figure out why
Turns out this was a type issue: existing_component.component.id is a UUID while the array items are strings, and a UUID never compares equal to a string. Converting the id to a string solved the issue: existing_component.component.id.__str__() (the more idiomatic spelling is str(existing_component.component.id)).
Pyqt5 QtableWidget and integrated combobox fails to call a function when combobox items change I have the below code :import sysfrom PyQt5.QtGui import *from PyQt5.QtCore import *from PyQt5.QtWidgets import *class tabdemo(QMainWindow): def __init__(self): super(tabdemo, self).__init__() self.setGeometry(50,50,500,500) self.centralWidget = QWidget() self.setCentralWidget(self.centralWidget) self.table() self.mainHBOX_param_scene = QHBoxLayout() self.mainHBOX_param_scene.addWidget(self.tableWidget) self.centralWidget.setLayout(self.mainHBOX_param_scene) def table(self): self.tableWidget = QTableWidget() self.tableWidget.setColumnCount(2) self.tableWidget.setRowCount(5) attr = ['one', 'two', 'three', 'four', 'five'] i = 0 for j in attr: self.tableWidget.setItem(i, 0, QTableWidgetItem(j)) combobox = QComboBox() for txt in ["Sinus","Triangle","Square"]: combobox.addItem(txt) self.tableWidget.setCellWidget(i, 1, combobox) i += 1 self.tableWidget.itemChanged.connect(self.Table_itemchanged) def Table_itemchanged(self): print('Changed')def main(): app = QApplication(sys.argv) ex = tabdemo() ex.show() sys.exit(app.exec_())if __name__ == '__main__': main()It just puts a Qtablewidget with one column being labels and another column being comboboxes.What I'm trying to do is when I modify the value of whatever combobox, I want call a function which, in this case, print 'changed' in the console.the signal connection self.tableWidget.itemChanged.connect(self.Table_itemchanged) works fine when I edit the label part of the QtableWidget but it fails when I modify the combobox item.Does anyone could explain me why and if it exists a solution to my issue.Thank you all.
Use the currentIndexChanged signal of the combobox.You can set a property on the combobox to store and recover which row (and column if you want) it belongs to. for j in attr: self.tableWidget.setItem(i, 0, QTableWidgetItem(j)) combobox = QComboBox() for txt in ["Sinus","Triangle","Square"]: combobox.addItem(txt) combobox.setProperty('row', i) combobox.currentIndexChanged.connect(self.Combo_indexchanged) self.tableWidget.setCellWidget(i, 1, combobox) i += 1and add a new slot:def Combo_indexchanged(self): combo = self.sender() row = combo.property('row') index = combo.currentIndex() print('combo row %d indexChanged to %d' % (row, index))
Add error if no vowel detected in input string I'm writing a program that is supposed to take an input and output what the most common vowel is as seen here:while True: string = input("Enter a line of text: ") vowel = "aeiouAEIOU" x = Counter(c for c in string.upper() if c in vowel) most = {k: x[k] for k in x if x[k] == max(x.values())} for i in most: vowel = i y = most [i] print("The most frequently occurring vowel in the string is: " ,vowel, "with ,y, "occurrences.") breakBut I can't figure out how to have an error message if there are no vowels in the input. I have tried:if vowel != string: print("Error, no vowels were detected in the user input.") continueBut this doesn't work. If I put it before the section where it outputs the most common vowel, then no matter what is input the error message shows and the input starts again. If I put it after that, then the vowels are detected and most common is printed, but it continues to display the error message and restart the input instead of breaking the program.How can I write the error so that it looks at the input to see if there are any vowels in there and displays the error if there aren't any?
Since you already have a counter of all vowels (x) it would be a waste to check (again) whether user input contains vowels. You could simply check that x is empty (i.e., that it has not counted any vowels):if not x: print("Error, no vowels were detected in the user input.") continueIn addition, consider either dropping .upper() from c for c in string.upper() if c in vowel OR dropping lower case letters from vowel = "aeiouAEIOU". Keeping both is unnecessary.
Optimizing DB queries I need to verify existence of entities in the database. If db_is_team_exist(team_id): If db_is_user_exist_by_id(user_id): ok else: raise ObjectDoesNotExist("user", user_id) else: raise ObjectDoesNotExist("team", team_id)Query functions in the database:def db_is_team_exist(team_id: str) -> bool: cursor.execute(f "SELECT COUNT(1) FROM teams WHERE id='{team_id}';") return bool(cursor.fetchone()[0])def db_is_user_exist_by_id(user_id: str) -> bool: cursor.execute(f "SELECT COUNT(1) FROM users WHERE id='{user_id}';") return bool(cursor.fetchone()[0])Sometimes there are too many of these checks for me to afford such a load on the database. Is there any way to check existence of two+ entities which are in different tables with one query? Or reduce the number of queries in the database in another way?
You can technically use a single query like:SELECT 1 WHERE EXISTS (SELECT 'x' FROM teams WHERE id='{team_id}') AND EXISTS (SELECT 'x' FROM users WHERE id='{user_id}')Please use proper parameterized queries though.
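A hedged sketch of the parameterized version, assuming a DB-API driver that uses %s placeholders (e.g. psycopg2 or MySQLdb); the driver handles the quoting, which also closes the SQL-injection hole left by the f-strings:

query = (
    "SELECT 1"
    " WHERE EXISTS (SELECT 'x' FROM teams WHERE id = %s)"
    "   AND EXISTS (SELECT 'x' FROM users WHERE id = %s)"
)
cursor.execute(query, (team_id, user_id))
both_exist = cursor.fetchone() is not None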
Python/Selenium/Chromedriver: the script opens just a blank Google Chrome page I have a problem with a browser automation script on a specific windows 7 machine. The code is written in python 3.7.4 with Selenium and Chromedriver. When I run it from a command line only Chrome browser starts but it does not open the url. This problem occurs only on one windows 7 machine and I can't figure out its reason. I've tried to run the script with both disabled firewall and antivirus, but unfortunately these measures don't help. Also there are no any error output in the command line.I thought that something is preventing the script from connecting to the internet but python scripts with urllib.request run without any problems. The script works fine on Fedora 30 and Debian 10. I've also tested it on windows 10 and windows 7 via Gnome Boxes: everything was ok. The original code is about 3 000 lines, so here's a small sample I've written from scratch:from selenium import webdriverbrowser = webdriver.Chrome(executable_path = 'webdriver/chromedriver.exe')print('Starting')browser.get('https://google.com')So when I run the script, nothing happens besides the opening of a blank page in Chrome. And "print" is not executed too.I've stored "browser" variable in separate file. When I run the script with this variable in the same file I've got the following error message:DevTools listening on ws://127.0.0.1:27046/devtools/browser/1ecf2c8f-c0cb-44d7-927d-cfa3901f645bTraceback (most recent call last):File "test-no-conf.py", line 5, in <module>executable_path = 'webdriver/chromedriver.exe'File "C:\Users\К\AppData\Local\Programs\Python\Python37\lib\site-packages\selenium\webdriver\chrome\webdriver.py", line 81, in __init__desired_capabilities=desired_capabilities)File "C:\Users\К\AppData\Local\Programs\Python\Python37\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 157, in __init__self.start_session(capabilities, browser_profile)File "C:\Users\К\AppData\Local\Programs\Python\Python37\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 252, in start_sessionresponse = self.execute(Command.NEW_SESSION, parameters)File "C:\Users\К\AppData\Local\Programs\Python\Python37\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in executeself.error_handler.check_response(response)File "C:\Users\К\AppData\Local\Programs\Python\Python37\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_responseraise exception_class(message, screen, stacktrace)selenium.common.exceptions.SessionNotCreatedException: Message: session not createdfrom disconnected: Unable to receive message from renderer(Session info: chrome=77.0.3865.120)Thank you in advance.
Check the Chrome browser version you have installed in the machine and compare it with the Chrome driver version.You can learn more about these changes here and download latest drivers here.In particular these instructions:"Here are the steps to select the version of ChromeDriver to download:First, find out which version of Chrome you are using. Let's say you have Chrome 72.0.3626.81.Take the Chrome version number, remove the last part, and append the result to URL "https://chromedriver.storage.googleapis.com/LATEST_RELEASE_". For example, with Chrome version 72.0.3626.81, you'd get a URL "https://chromedriver.storage.googleapis.com/LATEST_RELEASE_72.0.3626".Use the URL created in the last step to retrieve a small file containing the version of ChromeDriver to use. For example, the above URL will get your a file containing "72.0.3626.69". (The actual number may change in the future, of course.)Use the version number retrieved from the previous step to construct the URL to download ChromeDriver. With version 72.0.3626.69, the URL would be "https://chromedriver.storage.googleapis.com/index.html?path=72.0.3626.69/".After the initial download, it is recommended that you occasionally go through the above process again to see if there are any bug fix releases."If this doesn't resolve this error, make sure all previous chrome and driver instances are closed.
My Pygame messagetoscreen function doesn't show text

I am following a YouTube tutorial series, and I came across this problem with my pygame game. I made a function called messagetoscreen, and it works fine in the video, but it doesn't work for me. Here is my code:

#Imports
import pygame
pygame.init()
pygame.font.init()
import sys
import random
import cx_Freeze
import time

#Variables
playerhealth = 100
black = (0, 0, 0)
white = (255, 255, 255)
red = (255, 0, 0)
green = (0, 255, 0)
windowtitle = "Climber"
sizex = 1000
sizey = 700
rect1x = 500
rect1y = 350
rect1sizex = 50
rect1sizey = 50
rect1xchange = 0
rect1ychange = 0
clock = pygame.time.Clock()
fps = 600
font = pygame.font.SysFont(None, 25)

#Functions
def messagetoscreen(msg, color):
    screentext = font.render(msg, True, color)
    gamedisplay.blit(screentext, [sizex / 2, sizey / 2])

#Initialization
gamedisplay = pygame.display.set_mode((sizex, sizey))
pygame.display.set_caption(windowtitle)

#Game Loop
while True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            quit()
        if event.type == pygame.KEYDOWN:
            if event.key == pygame.K_w:
                rect1ychange -= 1
            elif event.key == pygame.K_a:
                rect1xchange -= 1
            elif event.key == pygame.K_s:
                rect1ychange += 1
            elif event.key == pygame.K_d:
                rect1xchange += 1
        if event.type == pygame.KEYUP:
            if event.key == pygame.K_w:
                rect1ychange = 0
            elif event.key == pygame.K_a:
                rect1xchange = 0
            elif event.key == pygame.K_s:
                rect1ychange = 0
            elif event.key == pygame.K_d:
                rect1xchange = 0

    if rect1y > 650:
        rect1y = 650
    if rect1x > 950:
        rect1x = 950
    if rect1y < 0:
        rect1y = 0
    if rect1x < 0:
        rect1x = 0

    rect1x += rect1xchange
    rect1y += rect1ychange

    messagetoscreen("HAPPY", red)
    gamedisplay.fill(white)
    pygame.draw.rect(gamedisplay, green, (rect1x, rect1y, rect1sizex, rect1sizey))
    pygame.display.update()
    clock.tick(fps)

I would like to know how to fix my function, and anything else that could cause an error.
The answer is that I blitted the text to the screen before I filled gamedisplay with white, so the fill painted over it. Remember to blit text after drawing the background. Here is the corrected drawing code in the while loop:

gamedisplay.fill(white)
pygame.draw.rect(gamedisplay, green, (rect1x, rect1y, rect1sizex, rect1sizey))
messagetoscreen("HAPPY", red)
Computing rolling averages by integer days in pandas

I have taken some data from a csv and put it into a dataframe:

from pandas import read_csv

df = read_csv('C:\...', delimiter=',', encoding='utf-8')
df2 = df.groupby(['i-j', 'day'])['i-j'].agg({'count'})

I would like to calculate for each 'i-j' the seven day moving average of their count. First I think I need to add the days with zero count to the table. Is there an easy way to do this by modifying my code above? In other words I would like missing values to count as 0.

Then I would need to add another column to the dataframe that calculates the average of count for each i-j over the previous seven days. Do I need to convert the days to something that pandas recognizes as a date value in order to use some of the rolling statistical functions? Or can I just change the type of the 'date' column and proceed?

Many thanks!
There may be a better way to do this, but given your starting DataFrame of df2 the following should work.

First reindex df2 to fill in the missing days with zeros:

new_index = pd.MultiIndex.from_product([df2.index.get_level_values(0).unique(), range(31)])
df2 = df2.reindex(new_index, fill_value=0)

(I'm assuming you want 31 days, but you can change this as necessary.)

Now if you unstack this reindexed DataFrame and take the transpose, you have a DataFrame where each column is an entry of i-j and contains the counts per day:

df2.unstack().T

You can calculate the rolling mean of this DataFrame (pd.rolling_mean(df2.unstack().T, 7) in old pandas versions; the top-level function was removed, so modern pandas uses the .rolling accessor):

rm = df2.unstack().T.rolling(window=7).mean()

To finish, you can stack this frame of rolling means to get back to the shape of the original reindexed df2:

rm.T.stack(dropna=False)
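For reference, here is a minimal self-contained sketch of the same reindex-then-roll approach on made-up toy data; the 'i-j'/'day' names and the 7-day window mirror the question, and the counts are invented for illustration:

import pandas as pd

# Toy counts: two 'i-j' groups with gaps in their day coverage
df2 = pd.DataFrame(
    {'count': [3, 1, 2, 5]},
    index=pd.MultiIndex.from_tuples(
        [('a-b', 0), ('a-b', 2), ('c-d', 1), ('c-d', 3)], names=['i-j', 'day']
    ),
)

# Fill the missing days of each group with zero counts
new_index = pd.MultiIndex.from_product(
    [df2.index.get_level_values(0).unique(), range(7)], names=['i-j', 'day']
)
df2 = df2.reindex(new_index, fill_value=0)

# One column per 'i-j', one row per day, then a 7-day rolling mean
rm = df2['count'].unstack(level=0).rolling(window=7).mean()
print(rm)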
Python Multiprocessing Pipe hang

I'm trying to build a program that sends a string to the processes tangki and tangki2, which each send a bit of array data to the process outdata, but it doesn't work correctly. When I disable the gate to outdata, everything works flawlessly. This is the example code:

import os
from multiprocessing import Process, Pipe
from time import sleep
import cv2

def outdata(input1, input2):
    while(1):
        room = input1.recv()
        room2 = input2.recv()

def tangki(keran1, selang1):  ##============tangki1
    a = None
    x, y, degree, tinggi = 0, 0, 0, 0
    dout = []
    while(1):
        frame = keran1.recv()
        dout.append([x, y, degree, tinggi])
        selang1.send(dout)
        print("received from: {}".format(frame))

def tangki2(keran3, selang2):  ##=================tangki2
    x, y, degree, tinggi = 0, 0, 0, 0
    dout2 = []
    while(1):
        frame = keran3.recv()
        dout2.append([x, y, degree, tinggi])
        selang2.send(dout2)
        print("received from: {}".format(frame))

def pompa(gate1, gate2):
    count = 0
    while(1):
        count += 1
        gate1.send("gate 1, val{}".format(count))
        gate2.send("gate 2, val{}".format(count))

if __name__ == '__main__':
    pipa1, pipa2 = Pipe()
    pipa3, pipa4 = Pipe()
    tx1, rx1 = Pipe()
    tx2, rx2 = Pipe()
    ptangki = Process(target=tangki, args=(pipa2, tx1))
    ptangki2 = Process(target=tangki2, args=(pipa4, tx2))
    ppompa = Process(target=pompa, args=(pipa1, pipa3))
    keran = Process(target=outdata, args=(rx1, rx2))
    ptangki.start()
    ptangki2.start()
    ppompa.start()
    keran.start()
    ptangki.join()
    ptangki2.join()
    ppompa.join()
    keran.join()

When the count reaches exactly 108, the process hangs, not responding whatsoever. When I top it, the python3 process is gone; it seems that selang1 and selang2 are causing the problem. I've searched Google and it might be a pipe deadlock. So the question is how to prevent this from happening, since I already drain all data from the pipes via repeated reads on both input1 and input2.

Edit: it seems that the only problem is the communication from tangki and tangki2 to outdata.
It's actually because of the pipe buffer size limit: dout and dout2 grow on every iteration, so each send gets larger until the pipe's buffer fills and the send blocks. Resetting the payload to a minimal size fixes it, either by assigning dout = [x, y, degree, tinggi] and dout2 = [x, y, degree, tinggi] instead of appending, or by assigning dout = [0, 0, 0, 0] and dout2 = [0, 0, 0, 0] right after selang1.send(dout) and selang2.send(dout2).
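A minimal sketch of that fix inside one of the sender loops, assuming the fresh-payload variant (names taken from the question; tangki2 would change the same way):

def tangki(keran1, selang1):
    x, y, degree, tinggi = 0, 0, 0, 0
    while True:
        frame = keran1.recv()
        # Build a fresh, fixed-size payload each iteration instead of
        # appending, so the message never outgrows the pipe buffer
        dout = [x, y, degree, tinggi]
        selang1.send(dout)
        print("received from: {}".format(frame))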
PyCharm does not recognize a module in the project

I ran into the following problem: I created a project in PyCharm and PyCharm does not recognize modules from the packages. I use Python 3.6. Please see the screenshot:
Try the following steps:

Go to File -> Settings (it'll open a window). In the right box, go to Project -> Project Structure. On the left, click + Add Content Root and pick the folder of the module you'd like to add. After that, mark this folder as Source. You're going to see something like this:

In this same window, go to Project Interpreter. In the top right, click the gear icon and choose Show All. On the right there are some icons; click the last one, Show paths for the selected interpreter. Now click the plus icon and insert the path to your project. This screen looks like this:
Google OAuth2 - Error: redirect_uri_mismatch

I'm trying to run this project: https://github.com/googleapis/python-analytics-data

I created a new OAuth 2.0 client in Cloud Platform and I have the client secret code, and I added the URI http://localhost to the OAuth client settings, but I get this error: redirect_uri_mismatch
redirect_uri_mismatch

This is a configuration issue. The redirect URI you added in the Google Cloud console for your project must exactly match the one that your code is sending. The easiest solution is to check the error message: it should tell you which redirect URI is missing; simply add that in the Google developer console. See Google OAuth2: How to fix the redirect_uri_mismatch error.
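For a locally run script like the python-analytics-data samples, one common setup is the installed-app flow from google-auth-oauthlib, which starts a local redirect listener. A minimal sketch follows; the secrets file path, port, and scope are assumed example values, and the registered redirect URI on the OAuth client must then match http://localhost:8080/ exactly:

from google_auth_oauthlib.flow import InstalledAppFlow

# Assumed example values; adjust to your own project
CLIENT_SECRETS_FILE = 'client_secret.json'
SCOPES = ['https://www.googleapis.com/auth/analytics.readonly']

flow = InstalledAppFlow.from_client_secrets_file(CLIENT_SECRETS_FILE, SCOPES)
# The redirect URI sent to Google is http://localhost:8080/,
# so that exact URI must be registered on the OAuth client
credentials = flow.run_local_server(port=8080)
print('Access token obtained:', bool(credentials.token))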
Call a Python function from NodeJS

I need to call a Python function from a NodeJS service. I checked this link and wrote the NodeJS code below:

const express = require('express')
const app = express()

let runPy = new Promise(function(success, nosuccess) {
    const { spawn } = require('child_process');
    const pyprog = spawn('python', ['./ml.py']);

    pyprog.stdout.on('data', function(data) {
        success(data);
    });

    pyprog.stderr.on('data', (data) => {
        nosuccess(data);
    });
});

app.get('/', (req, res) => {
    res.write('welcome\n');
    runPy.then(function(testMLFunction) {
        console.log(testMLFunction.toString());
        res.end(testMLFunction);
    });
})

app.listen(4000, () => console.log('Application listening on port 4000!'))

Suppose I have sample Python code in ml.py like below:

def testMLFunction():
    return "hello from Python"

Now when I run the NodeJS code and do a GET through curl, I only see the message 'welcome', which is the console log of the GET endpoint. But I don't see the message that is returned from the Python function anywhere. What am I missing?
I guess you misunderstand the process of calling Python. You are not calling the Python function in Node directly; you spawn a process that runs the shell command python ./ml.py and collect the console output. If that's clear, then the problem is obvious: you define the function in Python but you forget to call it.

def testMLFunction():
    return "hello from Python"

print(testMLFunction())

By the way, I guess you are going to run a machine learning algorithm, as the file is named ml. If that's the case, calling it from the shell may be inefficient, as loading a model could be time-consuming and it will be reloaded on every call. Instead, I recommend you also build a Python server and access the predicted results via internal HTTP requests.
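One way to realize that last suggestion, not shown in the original answer, is a small Flask app that loads the model once at startup and serves predictions; the /predict endpoint and the model-loading lines here are hypothetical placeholders:

from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical: load your model once at startup, not per request
# model = load_model('model.pkl')

@app.route('/predict', methods=['POST'])
def predict():
    payload = request.get_json()
    # Hypothetical: result = model.predict(payload['features'])
    result = 'hello from Python'  # placeholder response
    return jsonify({'result': result})

if __name__ == '__main__':
    app.run(port=5000)

The Node service would then issue an HTTP request to http://localhost:5000/predict instead of spawning a Python process per call.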
2 download click buttons with the same class using Selenium Python

I am unable to click on 2 download buttons with the same button class. Below is the code:

file = driver.find_element_by_xpath("(//button[@class='MuiButtonBase-root MuiIconButton-root IconButton-sc-iv40hv-1 cLszZl IconButton-sc-iv40hv-0 hbJxSM DownloadButton-sc-19l7ggt-0 gHtfyl MuiIconButton-colorPrimary'])")
driver.execute_script("arguments[0].click();", file)

This works only for the 1st button. If I add a 2nd file with index value 2, only the second button works. I need to click on both download buttons, one after another.
To get a specific button, try the class with an index, or the text with an index, whichever you are comfortable with. For example, say you have 2 buttons like this:

Button 1:
<button class="expedition_button awesome-button " onclick="attack(null, '2', 1, 0, '')">Attack</button>

Button 2:
<button class="expedition_button awesome-button " onclick="attack(null, '2', 2, 0, '')">Attack</button>

You can address them with:

driver.find_element_by_xpath("(//button[text()[contains(.,'Attack')]])[indexval]")
driver.find_element_by_xpath("(//button[@class='expedition_button awesome-button '])[indexval]")

It is similar for buttons 2, 3 and 4; just increase the index value. For button 1:

driver.find_element_by_xpath("(//button[text()[contains(.,'Attack')]])[1]")

or

driver.find_element_by_xpath("(//button[@class='expedition_button awesome-button '])[1]")
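Another option, not in the original answer, is to fetch all matching buttons at once with find_elements (the plural variant of the API the question already uses) and click them in a loop; this sketch assumes one of the class fragments from the question's markup and reuses its JavaScript-click workaround:

# Find every button that shares the class fragment, then click each in turn
buttons = driver.find_elements_by_xpath(
    "//button[contains(@class, 'DownloadButton-sc-19l7ggt-0')]"
)
for button in buttons:
    driver.execute_script("arguments[0].click();", button)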
Ordering by subquery in SQLAlchemy

I'm trying to select the newest threads (Thread) ordered descending by the time of the most recent reply to them (the reply is a Post model; that's a standard forum query). In SQL I'd write it like this:

SELECT * FROM thread AS t
ORDER BY (SELECT MAX(posted_at) FROM post WHERE thread_id = t.id) DESC

How do I do such a thing in SQLAlchemy? I tried something like this:

scalar = db.select[func.max(Post.posted_at)].where(Post.thread_id == Thread.id).as_scalar()
threads = Thread.query.order_by(scalar.desc()).all()

But it seems that I don't understand how scalars work. Reading the docs for the 5th time won't help. Could someone help me write such a query in SQLAlchemy? I use Flask-SQLAlchemy and MySQL for this app.
Looks fine to me; here's a test:

from sqlalchemy import *
from sqlalchemy.orm import *
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Thread(Base):
    __tablename__ = 'thread'
    id = Column(Integer, primary_key=True)

class Post(Base):
    __tablename__ = 'post'
    id = Column(Integer, primary_key=True)
    thread_id = Column(Integer, ForeignKey('thread.id'))
    posted_at = Column(String)

s = Session()

scalar = select([func.max(Post.posted_at)]).where(Post.thread_id == Thread.id).as_scalar()
q = s.query(Thread).order_by(scalar.desc())
print(q)

Output (note we're just printing the SQL here):

SELECT thread.id AS thread_id
FROM thread
ORDER BY (SELECT max(post.posted_at) AS max_1
          FROM post
          WHERE post.thread_id = thread.id) DESC

Looks pretty much like your query.
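One note on the question's Flask-SQLAlchemy attempt: select takes the column list as a call argument, not a subscript, so db.select[...] should be db.select([...]). A minimal corrected sketch, assuming the question's Thread and Post models:

from sqlalchemy import func

# select(...) is a function call, not db.select[...]
scalar = db.select([func.max(Post.posted_at)]).where(Post.thread_id == Thread.id).as_scalar()
threads = Thread.query.order_by(scalar.desc()).all()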
Python subprocess: wait for command to finish before starting next one?

I've written a Python script that downloads and converts many images, using wget and then ImageMagick via chained subprocess calls:

for img in images:
    convert_str = 'wget -O ./img/merchant/download.jpg %s; ' % img['url']
    convert_str += 'convert ./img/merchant/download.jpg -resize 110x110 '
    convert_str += ' -background white -gravity center -extent 110x110'
    convert_str += ' ./img/thumbnails/%s.jpg' % img['id']
    subprocess.call(convert_str, shell=True)

If I run the content of convert_str manually at the command line, it appears to work without any errors, but if I run the script so it executes repeatedly, it sometimes gives me the following output:

--2013-06-19 04:01:50-- http://www.lkbennett.com/medias/sys_master/8815507341342.jpg
Resolving www.lkbennett.com... 157.125.69.163
Connecting to www.lkbennett.com|157.125.69.163|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 22306 (22K) [image/jpeg]
Saving to: `/home/me/webapps/images/m/img/merchant/download.jpg'

 0K .......... .......... . 100% 1.03M=0.02s

2013-06-19 04:01:50 (1.03 MB/s) - `/home/annaps/webapps/images/m/img/merchant/download.jpg' saved [22306/22306]

/home/annaps/webapps/images/m/img/merchant/download.jpg
[Errno 2] No such file or directory: ' /home/annaps/webapps/images/m/img/merchant/download.jpg'

Oddly, despite the "No such file or directory" message, the images generally seem to have downloaded and converted OK. But occasionally they look corrupt, with black stripes on them (even though I'm using the latest version of ImageMagick), which I assume is because they aren't completely downloaded before the command executes. Is there any way I can say to Python or to subprocess: "don't run the second command until the first has definitely completed successfully?" I found this question but can't see a clear answer!
Normally, subprocess.call is blocking. If you want non-blocking behavior, use subprocess.Popen; in that case, you have to explicitly call Popen.wait to wait for the process to terminate. See https://stackoverflow.com/a/2837319/2363712

BTW, in shell, if you wish to chain processes you should use && instead of ;, thus preventing the second command from being launched if the first one failed. In addition, you should test the subprocess exit status in your Python program in order to determine whether the command was successful.
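Applied to the question's loop, a minimal sketch: run the two commands as separate blocking calls without a shell and check each return code before moving on (paths and dict keys copied from the question; images is assumed to be defined as there):

import subprocess

for img in images:
    # Download first; subprocess.call blocks until wget exits
    rc = subprocess.call(['wget', '-O', './img/merchant/download.jpg', img['url']])
    if rc != 0:
        continue  # skip the convert step if the download failed

    # Convert only after a successful download
    subprocess.call([
        'convert', './img/merchant/download.jpg',
        '-resize', '110x110',
        '-background', 'white', '-gravity', 'center',
        '-extent', '110x110',
        './img/thumbnails/%s.jpg' % img['id'],
    ])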
How to use a pandas interval to lookup values, to fill another dataframe

I have two dataframes (df1, df2):

 x  id
35   4
55   3
92   2
99   5

and

id             x  val
 1   (0.0, 50.0]  1.2
 2   (90.0, inf]  0.5
 3   (0.0, 50.0]  8.9
 3  (50.0, 90.0]  9.9
 4   (0.0, 50.0]  4.3
 4  (50.0, 90.0]  1.1
 4   (90.0, inf]  2.9
 5  (50.0, 90.0]  3.2
 5   (90.0, inf]  5.1

I want to add a new column x_new in the first dataframe, df1, whose values depend on the lookup table in the second dataframe, df2. According to the id and the value of x, there is a special multiplier, to get the new value x_new:

 x  id   x_new
35   4  35*4.3
55   3  55*9.9
92   2     ...
99   5     ...

The value ranges in the second dataframe were created with a pandas cut:

df2 = df.groupby(['id', pd.cut(df.x, [0, 50, 90, np.inf])]).apply(lambda x: np.average(x['var1']/x['var2'], weights=x['var1'])).reset_index(name='val')

My idea is starting with the pandas built-in lookup function:

df1['x_new'] = df.lookup(df.index, df['id'])

I don't know how to get it to work. Also see my previous question for more information about the code.
A value can be found in a pd.Interval: 40 in pd.Interval(0.0, 50.0, closed='right') evaluates as True. Likewise, if a pd.Interval is in an index, a value passed using .loc will find the correct interval: df2.loc[(3, 35)] will return 8.9. Since df2 is multi-indexed, the values for the index are passed as a tuple. A KeyError will occur if a value from df1 doesn't exist in the index of df2, so you may need to write a function with try-except. df1_in_df2 = df1[df1.id.isin(df2.index.get_level_values(0))] will find all df1.id in df2.index.

import pandas as pd
import numpy as np

# set up the dataframes
df1 = pd.DataFrame({'id': [4, 3, 2, 5], 'x': [35, 55, 92, 99]})
df2 = pd.DataFrame({'id': [1, 2, 3, 3, 4, 4, 4, 5, 5],
                    'x': [pd.Interval(0.0, 50.0, closed='right'),
                          pd.Interval(90.0, np.inf, closed='right'),
                          pd.Interval(0.0, 50.0, closed='right'),
                          pd.Interval(50.0, 90.0, closed='right'),
                          pd.Interval(0.0, 50.0, closed='right'),
                          pd.Interval(50.0, 90.0, closed='right'),
                          pd.Interval(90.0, np.inf, closed='right'),
                          pd.Interval(50.0, 90.0, closed='right'),
                          pd.Interval(90.0, np.inf, closed='right')],
                    'val': [1.2, 0.5, 8.9, 9.9, 4.3, 1.1, 2.9, 3.2, 5.1]})

# set id and x as the index of df2
df2 = df2.set_index(['id', 'x'])

# display(df2)
                  val
id x
1  (0.0, 50.0]   1.2
2  (90.0, inf]   0.5
3  (0.0, 50.0]   8.9
   (50.0, 90.0]  9.9
4  (0.0, 50.0]   4.3
   (50.0, 90.0]  1.1
   (90.0, inf]   2.9
5  (50.0, 90.0]  3.2
   (90.0, inf]   5.1

# use a lambda expression to pass id and x of df1 as index labels to df2 and return val
df1['val'] = df1.apply(lambda x: df2.loc[(x['id'], x['x'])], axis=1)

# multiply x and val to get x_new
df1['x_new'] = df1.x.mul(df1.val)

# display(df1)
   id   x  val  x_new
0   4  35  4.3  150.5
1   3  55  9.9  544.5
2   2  92  0.5   46.0
3   5  99  5.1  504.9
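The answer mentions wrapping the lookup in try-except for pairs that fall outside every registered interval, but doesn't show it; a minimal sketch of such a helper (the function name lookup_val is made up, and the 'val' column label narrows the result to a scalar):

def lookup_val(row):
    # Return the matching multiplier, or NaN when the (id, x) pair
    # is not covered by any interval registered for that id
    try:
        return df2.loc[(row['id'], row['x']), 'val']
    except KeyError:
        return np.nan

df1['val'] = df1.apply(lookup_val, axis=1)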